SPOautos

Members
  • Posts: 388
Posts posted by SPOautos

  1. Okay, so I've been running my server as-is for almost 3 years now with great success. But today something is going wrong, and I'm not sure what it is or the best way to troubleshoot hardware issues with Unraid.

     

    I'm running 6.11.5 and it's been running fine for several weeks with no issues. I can log into the server from my laptop, but on the Dashboard the processor is at 0%, the RAM shows 0%, the GPU shows 0% (usually at least the GPU fan shows RPM), and the processor fan shows 0 RPM even though I can see that it's actually spinning. The NIC is on the motherboard and shows no activity, when it usually shows tiny amounts of activity all the time. It shows my shares and Dockers, but when I go to the Main tab it doesn't show anything. If I try to update a Docker, it can't.

     

    I'm thinking maybe the motherboard died??? But could it also be the processor? Or something else???
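
    If the web UI is only half-working, the console (or SSH) may still respond, which at least lets you grab logs before rebooting. A rough sketch, assuming the stock Unraid command line (the diagnostics command normally writes a zip to /boot/logs on the flash):

    # check the tail of the system log for hardware errors before anything gets rebooted
    tail -n 100 /var/log/syslog
    # generate a diagnostics zip on the flash drive so it survives a reboot
    diagnostics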

     

    Any direction and/or advice is greatly appreciated!!!

  2. I recently updated to 6.11.5; prior to that I was on 6.10.3. When I was on 6.10.3 and the server was at rest, it would run about 1% CPU load. Now it has been running between 5% and 15% even with nothing going on. All my apps in Docker show 0% CPU load, so I'm not sure what would be causing the CPU to be running. The load I have at idle is what it normally would be while playing a high-definition Plex movie or something like that. I don't use the server for much more than storing files and Plex.

     

    With all the apps showing 0%, what would be working the CPU? Looking in my syslog, it looks like there are a lot of errors that I don't have any idea what they're related to.
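
    In case it helps anyone else chasing this, the quickest way to see what is actually using the CPU is from the server's terminal rather than the Docker tab (a rough sketch, assuming the stock procps tools; container processes show up here too, under whatever process name runs inside them):

    # interactive view sorted by CPU
    top -o %CPU
    # or a one-shot list of the top consumers
    ps aux --sort=-%cpu | head -n 15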

     

    Thanks!

    tower-syslog-20221207-0419.zip

  3. On 11/30/2022 at 8:40 AM, SPOautos said:

    Hey guys, I'm still on 6.10.3 and everything has been running well; I haven't touched my Unraid since I did the 6.10.3 update. BUT I don't want it to get too far behind, so I'm considering updating to this latest one. I am NOT very knowledgeable about Unraid... am I likely to have a smooth transition, or will there probably be issues?

     

    Are there things I can do before updating to give it a higher chance of not messing up? I'll make a backup copy of the flash, update all my apps and plugins, and run a parity check... is there anything else that might help before updating?

     

    Just wanted to comment that my transition from 6.10.3 to 6.11.5 seems to have gone smoothly and problem-free. I updated all my apps and plugins, and for some reason my GPU fan went to 100% when I updated my plugins, but I rebooted and it was back to normal. Then I ran the Update Assistant; it suggested I remove NerdPack since it's no longer compatible, so I did that and then updated the OS. It installed, took 2 minutes to do a clean shutdown and reboot, and it's back up and running. Hopefully it'll continue being as stable as it was on 6.10.3... it's been running continuously since I did the 6.10.3 update, and I look forward to it remaining that stable going forward.

     

    Thank you for all you guys' advice and such. I couldn't have this server without you guys on the forum... I don't know jack about computers... I have no idea how I built this from scratch and use it daily, hahaha.

     

    Much appreciated!!!

    • Like 2
  4. Hello, I have a Radeon AMD GPU which literally does nothing but sit there in my server. It's never used, and typically the fans don't spin at all or spin very slowly. However, I was just updating my plugins, and as soon as I hit the button for them to update, the GPU fans spun up to 100% and won't come down. I have two plugins I thought might be an issue... a fan control plugin and a Radeon plugin. I have deleted both of those plugins and it didn't make any difference.

     

    I'm currently running 6.10.3 but was updating everything in preparation for updating to 6.11.5.

     

    Could this be because my Unraid is out of date and updating to 6.11.5 might correct it? Or should I try to resolve this before I update the OS?

     

    Thanks for any advice!

  5. 2 hours ago, trurl said:

    And if you want to be even more careful, disable Autostart in Settings before rebooting.

     

    As I was updating my Dockers and plugins, something odd happened. As soon as I clicked to update the plugins, my GPU fan ramped up to 100% (older Radeon AMD GPU) and I can't seem to get it back down. Should I try to resolve this before upgrading to 6.11.5, or is it possible that this is happening because I updated a plugin while still on the old 6.10.3 OS, in which case upgrading my Unraid may resolve the issue?

     

    I have a plugin for fan control and a plugin for Radeon GPUs... neither seems to have any effect on the fan being at 100%, as I have tried disabling and enabling them several times. *I actually completely removed the fan control and Radeon plugins, but that did not help anything.*

     

    For the record, my GPU is never used. I have never heard the fan running at 100%; it just sits there doing nothing, and normally if the fan spins at all it's very little... even right now, it's doing nothing except spinning the fan wide open.

     

    Thoughts?
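
    One thing that may be worth checking from the terminal is whether something left the card's fan in manual mode. This is only a sketch, assuming the GPU is on the in-kernel amdgpu/radeon driver and exposes hwmon controls; the card0/hwmon2 paths are examples and will differ per system:

    # find the hwmon node for the GPU
    ls /sys/class/drm/card0/device/hwmon/
    # 1 = manual fan control, 2 = automatic (driver-managed)
    cat /sys/class/drm/card0/device/hwmon/hwmon2/pwm1_enable
    # hand fan control back to the driver
    echo 2 > /sys/class/drm/card0/device/hwmon/hwmon2/pwm1_enable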

  6. 1 hour ago, lurknyou said:

    Updated from 6.10.3 to 6.11.5 just now and everything went really smoothly. I had to reinstall the Nvidia plugin and reboot the server again to get Plex to be able to use HW transcoding, but that isn't out of the norm for me.

     

    Thanks again team!

     

    Did you do anything before updating to help ensure it went smoothly?

  7. Hey guys, I'm still on 6.10.3 and everything has been running well; I haven't touched my Unraid since I did the 6.10.3 update. BUT I don't want it to get too far behind, so I'm considering updating to this latest one. I am NOT very knowledgeable about Unraid... am I likely to have a smooth transition, or will there probably be issues?

     

    Are there things I can do before updating to give it a higher chance of not messing up? I'll make a backup copy of the flash, update all my apps and plugins, and run a parity check... is there anything else that might help before updating?
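
    For the flash backup itself, the download on the Main > Flash page is the usual route; a command-line equivalent is roughly the following (the destination share name here is just an example):

    # copy the entire flash drive contents to a dated folder on the array
    rsync -a /boot/ /mnt/user/backups/flash-$(date +%Y%m%d)/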

  8. 9 hours ago, Arbadacarba said:

    One piece of advice... if unbalance gets stopped before it's finished, it leaves duplicate files in the original location (it copies and then deletes; I screwed up and stopped it, only to discover later that I had a HUGE amount of duplicate files afterwards), so don't try to cancel an unbalance session because it's taking too long.

     

    I still use it but now I'm cautious.

     

    Just to have a general idea, how much data were you moving and how long was it taking? If I'm moving 2TB between two drives, should it move at the speed of the hardware, or does unbalance slow it way down? I'll probably move 2TB at a time, with a total of 8TB moved.

     

    Will that take hours, days, weeks???
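
    For a very rough sense of scale, assuming the copy sustains something like typical array speed (call it ~120 MB/s; parity writes can make it slower):

    # back-of-the-envelope: 2 TB is roughly 2,000,000 MB
    echo $(( 2000000 / 120 / 3600 )) hours   # ~4-5 hours per 2 TB pass, so roughly 18-20 hours for all 8 TB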

  9. 5 hours ago, itimpi said:

    Most of the time the upgrades are painless, but it is always worth taking precautions against anything going wrong. Make sure you have a backup of your flash drive before attempting the upgrade, as you can then easily revert by copying the backup back onto the flash drive.
     

    It is a good idea to turn off auto-start of the array until you have done an initial check after the upgrade. Temporarily disabling the Docker and VM services is also not a bad idea.

     

    The one item that most frequently causes problems is if you have VMs with hardware passthrough, as the IDs of the hardware can change. In the worst case you can find an ID associated with a GPU now ends up assigned to an HBA. Make sure that you do not have any VMs set to auto-start until you can check the passed-through hardware.

     

    Thank you! It seems to be going smoothly so far!

  10. 2 minutes ago, Hoopster said:

    Use the Unbalance plugin.  It will do what you need.

     

    Thank you! I'm on 6.9.2, which has been very stable for me, and I'm thinking of upgrading to 6.10.3. Do you think I should hold off, add the new drive, move all the data around, etc. before upgrading? Or should I just go ahead and upgrade, make sure it's stable, then change the drives and such? I won't get the new drive for a week or so, so I thought maybe I should go ahead and upgrade the OS while I'm waiting. Do you have any thoughts on that?

  11. I am still on 6.9.2 and my server runs very well with no issues, but I was thinking of updating to the latest 6.10.3... the issue is I'm not much of a computer/Unraid expert, so I'm nervous about it not going well. Are there certain things (like hardware and applications) that are more likely to cause trouble?

     

    Thanks!

  12. I have four 6TB drives and a 12TB parity. My 6TB drives are nearly full (each with between 5.5 and 5.75TB used) and I have a new 12TB drive in the mail to me. Is there a way I can move some of the data from each 6TB drive to the 12TB drive so that they are not so close to maxed out? Maybe move about 2TB from each of the 6TB drives to the new 12TB drive. If not, is there a way I can tell Unraid to stop adding data to the 6TB drives and only use the 12TB drive?
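
    For what it's worth, the manual way to do the first part is to copy between the individual disk mounts and then delete the source (a rough sketch only; disk5 and the Movies folder here are placeholders, and the usual warning applies about not mixing /mnt/user paths with /mnt/diskN paths in the same copy). For the second part, each share also has Included disk(s) / Excluded disk(s) settings that control which drives new files land on.

    # copy part of a share from a full disk to the new one, preserving attributes
    rsync -avX --progress /mnt/disk1/Movies/SomeFolder/ /mnt/disk5/Movies/SomeFolder/
    # only after verifying the copy, remove the originals from the source disk
    rm -r /mnt/disk1/Movies/SomeFolder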

     

    Thanks!

  13. 3 hours ago, Squid said:

    It's the mining containers taking up the space.  I don't know how they work well enough to be able to advise if it's possible to store the data outside of the container, etc.  Ask in the respective support threads for them.

     

    But all of my containers added together are only 19.9GB of disk utilization, right? How is that so high? Is this just based on a setting somewhere that only allows a certain amount of space dedicated to containers?
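
    That percentage is measured against the size of the docker.img file set under Settings > Docker, not against total array space, which is why it can look high even when the containers themselves are small. A couple of stock Docker commands that break the usage down (run from the Unraid terminal):

    # shows how much space images, containers, and volumes are using inside docker.img
    docker system df
    # per-container size, including the writable layer that grows when an app writes inside the container
    docker ps --size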

  14. I recently set up a syslog file that runs, and this is what it shows for the 11th, 12th, and 13th, but I don't think I'm seeing anything in here that would be causing an issue of docker image disk utilization being high. I do have mining running, and it's been running for months, so I doubt that is the cause; it doesn't use much memory, it uses the GPU and half the CPU. I have no idea what all these entries are regarding port numbers and entering blocking/forwarding/disabled states.

     

    Jun 11 00:00:32 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 01:01:14 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 02:00:14 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 03:00:11 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 04:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 04:00:47 Tower root: /var/lib/docker: 12.9 GiB (13833793536 bytes) trimmed on /dev/loop2
    Jun 11 04:00:47 Tower root: /mnt/cache: 331.4 GiB (355793293312 bytes) trimmed on /dev/nvme0n1p1
    Jun 11 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
    Jun 11 05:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 06:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 07:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered disabled state
    Jun 11 07:44:09 Tower kernel: device veth2de21d8 entered promiscuous mode
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered forwarding state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered disabled state
    Jun 11 07:44:09 Tower kernel: eth0: renamed from veth5f484ab
    Jun 11 07:44:09 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2de21d8: link becomes ready
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered forwarding state
    Jun 11 08:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 08:24:45 Tower kernel: docker0: port 6(veth00d4a84) entered blocking state
    Jun 11 08:24:45 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:24:45 Tower kernel: device veth00d4a84 entered promiscuous mode
    Jun 11 08:24:46 Tower kernel: eth0: renamed from veth164898c
    Jun 11 08:24:46 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth00d4a84: link becomes ready
    Jun 11 08:24:46 Tower kernel: docker0: port 6(veth00d4a84) entered blocking state
    Jun 11 08:24:46 Tower kernel: docker0: port 6(veth00d4a84) entered forwarding state
    Jun 11 08:53:06 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:53:06 Tower kernel: veth164898c: renamed from eth0
    Jun 11 08:53:06 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:53:06 Tower kernel: device veth00d4a84 left promiscuous mode
    Jun 11 08:53:06 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered blocking state
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 08:53:07 Tower kernel: device vetha6de89e entered promiscuous mode
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered blocking state
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered forwarding state
    Jun 11 08:53:07 Tower kernel: eth0: renamed from veth61b0ff9
    Jun 11 08:53:07 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha6de89e: link becomes ready
    Jun 11 09:00:08 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 09:00:37 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 09:00:37 Tower kernel: veth61b0ff9: renamed from eth0
    Jun 11 09:00:37 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 09:00:37 Tower kernel: device vetha6de89e left promiscuous mode
    Jun 11 09:00:37 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered blocking state
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:00:37 Tower kernel: device veth3969633 entered promiscuous mode
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered blocking state
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered forwarding state
    Jun 11 09:00:38 Tower kernel: eth0: renamed from vethfd855b4
    Jun 11 09:00:38 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3969633: link becomes ready
    Jun 11 09:11:08 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:11:08 Tower kernel: vethfd855b4: renamed from eth0
    Jun 11 09:11:08 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:11:08 Tower kernel: device veth3969633 left promiscuous mode
    Jun 11 09:11:08 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered blocking state
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:11:09 Tower kernel: device veth3cc4ddf entered promiscuous mode
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered blocking state
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered forwarding state
    Jun 11 09:11:09 Tower kernel: eth0: renamed from veth83942c3
    Jun 11 09:11:09 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3cc4ddf: link becomes ready
    Jun 11 09:19:24 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:19:24 Tower kernel: veth83942c3: renamed from eth0
    Jun 11 09:19:24 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:19:24 Tower kernel: device veth3cc4ddf left promiscuous mode
    Jun 11 09:19:24 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered blocking state
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:19:25 Tower kernel: device veth6bc6288 entered promiscuous mode
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered blocking state
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered forwarding state
    Jun 11 09:19:25 Tower kernel: eth0: renamed from vethfac2dbd
    Jun 11 09:19:25 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6bc6288: link becomes ready
    Jun 11 09:22:54 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:22:54 Tower kernel: vethfac2dbd: renamed from eth0
    Jun 11 09:22:54 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:22:54 Tower kernel: device veth6bc6288 left promiscuous mode
    Jun 11 09:22:54 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered blocking state
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered disabled state
    Jun 11 09:22:55 Tower kernel: device vethb4fba36 entered promiscuous mode
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered blocking state
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered forwarding state
    Jun 11 09:22:55 Tower kernel: eth0: renamed from vethcd82465
    Jun 11 09:22:55 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb4fba36: link becomes ready
    Jun 11 10:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 11:00:08 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 12:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 13:00:01 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 14:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 15:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 16:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 17:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 18:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 19:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 20:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 21:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 22:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 23:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 00:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 01:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 02:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 03:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 04:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 04:00:43 Tower root: /var/lib/docker: 3.1 GiB (3280904192 bytes) trimmed on /dev/loop2
    Jun 12 04:00:43 Tower root: /mnt/cache: 326.5 GiB (350595309568 bytes) trimmed on /dev/nvme0n1p1
    Jun 12 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
    Jun 12 04:40:15 Tower root: FCP Debug Log: root     25980 1198  0.8 3598248 274764 ?      Sl   Jun11 13867:55 ./xmrig --url=xmr-us-east1.nanopool.org:14433 --coin=monero --user=  *I removed the user data from the copy/paste*
    Jun 12 04:40:15 Tower root: Fix Common Problems: Warning: Possible mining software running ** Ignored
    Jun 12 05:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 06:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 07:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 08:00:25 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 09:00:23 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 10:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 11:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 12:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 13:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 14:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 15:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 16:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 17:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 18:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 19:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 20:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 21:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 22:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 23:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 00:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 01:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 02:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 03:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 04:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 04:00:42 Tower root: /var/lib/docker: 3 GiB (3194228736 bytes) trimmed on /dev/loop2
    Jun 13 04:00:42 Tower root: /mnt/cache: 326.7 GiB (350819770368 bytes) trimmed on /dev/nvme0n1p1
    Jun 13 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
    Jun 13 04:40:14 Tower root: FCP Debug Log: root     25980 1198  0.8 3596196 277248 ?      Sl   Jun11 31122:13 ./xmrig --url=xmr-us-east1.nanopool.org:14433 --coin=monero --user=  *I removed the user data from the copy/paste*
    Jun 13 04:40:14 Tower root: Fix Common Problems: Warning: Possible mining software running ** Ignored
    Jun 13 04:40:15 Tower apcupsd[6677]: apcupsd exiting, signal 15
    Jun 13 04:40:15 Tower apcupsd[6677]: apcupsd shutdown succeeded
    Jun 13 04:40:18 Tower apcupsd[11753]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
    Jun 13 04:40:18 Tower apcupsd[11753]: NIS server startup succeeded
    Jun 13 05:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 06:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 07:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 08:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 08:56:11 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update unassigned.devices.plg
    Jun 13 08:56:11 Tower root: plugin: running: anonymous
    Jun 13 08:56:11 Tower root: plugin: creating: /boot/config/plugins/unassigned.devices/unassigned.devices-2021.06.11.tgz - downloading from URL https://github.com/dlandon/unassigned.devices/raw/master/unassigned.devices-2021.06.11.tgz
    Jun 13 08:56:12 Tower root: plugin: checking: /boot/config/plugins/unassigned.devices/unassigned.devices-2021.06.11.tgz - MD5
    Jun 13 08:56:12 Tower root: plugin: creating: /tmp/start_unassigned_devices - from INLINE content
    Jun 13 08:56:12 Tower root: plugin: setting: /tmp/start_unassigned_devices - mode to 0770
    Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/unassigned.devices.cfg already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/samba_mount.cfg already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/iso_mount.cfg already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/smb-settings.conf already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/config/smb-extra.conf already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/add-smb-extra already exists
    Jun 13 08:56:12 Tower root: plugin: setting: /tmp/unassigned.devices/add-smb-extra - mode to 0770
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/remove-smb-extra already exists
    Jun 13 08:56:12 Tower root: plugin: setting: /tmp/unassigned.devices/remove-smb-extra - mode to 0770
    Jun 13 08:56:12 Tower root: plugin: running: anonymous
    Jun 13 09:00:45 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 10:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null

     

     

  15. Also, if it makes a difference, everything is up to date... 6.9.2 with all containers and plugins updated. I'm pretty sure this is going to keep getting higher and higher over a couple-day period. I'm pretty sure this happened a couple of weeks ago and eventually made my server crash (well, it crashed and I never found out why, and it was running fine when it booted back up, and I just *think* it must have been because of this).

     

    I've looked through the log, but I don't see anything jumping out at me... but I also don't know enough about what's going on behind the scenes to know what everything means. I'm not an IT/computer guy... we have a children's clothing store and I do construction, lol. To be honest, I don't even know how I got all this working and being useful, except that the forum community is awesome.

     

     

  16. Hey guys, on the Dashboard under Memory then Docker, it's at 71% and growing; it's normally a very low number. I attached a diagnostic, can anyone tell what's causing it? I also have a notification of "Warning [TOWER] - Docker image disk utilization of 71%". When I look under the Docker tab in advanced view, none of the containers are using a lot of memory.

     

    Thanks...

    tower-diagnostics-20210613-0439.zip

  17. On 4/28/2021 at 2:18 PM, ijuarez said:

     

    I did not get notified of your response; I will try that, thank you.

     

    Were you ever able to get this resolved? I seem to be having a similar issue after updating: I can't find the server from the other devices in the Workgroup. Wondering what fixed it for you. Similarly, my settings have "Yes" for Workgroup.
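
    One quick check that separates discovery problems from SMB itself being broken: from another Linux machine (or any box with the Samba client tools), ask the server to list its shares directly by name. Just a sketch; TOWER and the username are placeholders:

    # if this lists the shares, SMB is fine and the problem is only network discovery/browsing
    smbclient -L //TOWER -U yourusername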

  18. 15 hours ago, Steace said:

    For those who have dual Intel Xeon E5 2650-3 CPUs (or maybe other Xeons too): I removed the first 6 cores and their threads, and now I get 5000-5500 H/s instead of ~3500 H/s or less with anything else I tried.

    What I tried: one more core/thread plus or minus, all of them, only the first core/thread removed like the OP said, and more.

    p.s. I also use the OP script and this as additional arguments:

    
    --no-color --asm intel --randomx-1gb-pages

     

    If that can help anyone 😎

     

    Thanks for putting this docker image in CA 😃, it's really appreciated.

    I saw you tell people that it's not worth it to add GPU support for it, but Ethereum has gone beyond the 4GB mark; lots of people still have 4GB GPUs, so it may be nice to have the possibility to mine Monero since we can't do Ethereum anymore (maybe with a hack, but probably not recommended). It may not be that profitable, but hey, we're doing that for fun mostly.

    The decision is yours to take, that was just my 2 cents ;) 

     

    I have a Xeon E5-2690 v3 and had --asm intel and --randomx-1gb-pages, but I could never seem to get the 1GB pages to work. When I removed those two arguments, my H/s actually went UP about 200 H/s. I have no idea why.

     

    Have you done a before/after to see what difference they make? Also, can you tell from your container log that you're actually getting 1GB pages?
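
    In case it's useful, here's the rough way I understand 1GB pages have to be made available on the host before xmrig can use them (a sketch, assuming a stock Linux kernel; reserving them at runtime can fail on a long-running box because memory gets fragmented, in which case kernel boot parameters are the more reliable route):

    # how many 1GB huge pages are currently reserved (0 means none)
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
    # try to reserve three of them for the RandomX dataset
    echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
    # then check the xmrig startup log to see whether 1GB pages were actually picked up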

    • Like 1
  19. 8 hours ago, G Speed said:

     

    * ABOUT XMRig/6.10.0 gcc/9.3.0
    * LIBS libuv/1.34.2 OpenSSL/1.1.1f hwloc/2.1.0
    * HUGE PAGES supported
    * 1GB PAGES supported
    * CPU Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz (1) 64-bit AES
    L2:1.0 MB L3:6.0 MB 4C/4T NUMA:1
    * MEMORY 11.2/15.5 GB (72%)

     

    Without XMRig running, I'm only at 15% RAM utilization. So yeah... something's wrong.

     

     

    My XMRig log was also showing really high RAM usage... like 26GB out of 32GB, but it was related to me having Plex transcoding in RAM. I had a large chunk of my RAM allocated for that purpose, so XMRig was seeing that and counting it as used even though my actual usage was only about 7GB.

     

    HOWEVER, that said, even after I made some changes and XMRig only sees my usage as about 7GB out of 32GB, I still can't use the 1GB pages. For some reason RandomX 1GB pages fails. Not sure why.

     

    Anyway, are you doing Plex transcoding in RAM? If so, pull up the script, click edit, and see how much RAM you have allocated to it... I had way more allocated than I needed, so I changed it to about 4GB and then restarted my array so it would take effect (just running the script didn't seem to work, but stopping and then starting the array did change how much was allocated).
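
    For anyone else tuning this, the usual "transcode to RAM" setup is just a tmpfs mount that the Plex container's transcode directory points at, so the cap is whatever size the mount was created with (a minimal sketch; the path and 4g size are examples, not necessarily what the script in this thread uses):

    # create a RAM-backed scratch directory capped at 4 GB
    mkdir -p /tmp/PlexRamScratch
    mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch
    # the Plex container then maps its transcode path to /tmp/PlexRamScratch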

    • Like 1
  20. Mine has been working great for months, but sometime in March it stopped connecting. I set up the 2-way authentication for the web login, but something else must be messed up. I've looked through everything and it all seems good, but in Unraid when you go to the WebUI and log in, it says "Unable to connect to destination for 10 days".

     

    Any ideas on where I should look?

     

    I restarted the container and here is a log from when it started back up... maybe it has some useful info.

     

    [services.d] starting app...
    [app] starting CrashPlan for Small Business...
    07/04/2021 11:19:26 X DAMAGE available on display, using it for polling hints.
    07/04/2021 11:19:26 To disable this behavior use: '-noxdamage'
    07/04/2021 11:19:26
    07/04/2021 11:19:26 Most compositing window managers like 'compiz' or 'beryl'
    07/04/2021 11:19:26 cause X DAMAGE to fail, and so you may not see any screen
    07/04/2021 11:19:26 updates via VNC. Either disable 'compiz' (recommended) or
    07/04/2021 11:19:26 supply the x11vnc '-noxdamage' command line option.
    07/04/2021 11:19:26 X COMPOSITE available on display, using it for window polling.
    07/04/2021 11:19:26 To disable this behavior use: '-noxcomposite'
    07/04/2021 11:19:26
    07/04/2021 11:19:26 Wireframing: -wireframe mode is in effect for window moves.
    07/04/2021 11:19:26 If this yields undesired behavior (poor response, painting
    07/04/2021 11:19:26 errors, etc) it may be disabled:
    07/04/2021 11:19:26 - use '-nowf' to disable wireframing completely.
    07/04/2021 11:19:26 - use '-nowcr' to disable the Copy Rectangle after the
    07/04/2021 11:19:26 moved window is released in the new position.
    07/04/2021 11:19:26 Also see the -help entry for tuning parameters.
    07/04/2021 11:19:26 You can press 3 Alt_L's (Left "Alt" key) in a row to
    07/04/2021 11:19:26 repaint the screen, also see the -fixscreen option for
    07/04/2021 11:19:26 periodic repaints.
    07/04/2021 11:19:26 GrabServer control via XTEST.
    [services.d] done.
    07/04/2021 11:19:26
    07/04/2021 11:19:26 Scroll Detection: -scrollcopyrect mode is in effect to
    07/04/2021 11:19:26 use RECORD extension to try to detect scrolling windows
    07/04/2021 11:19:26 (induced by either user keystroke or mouse input).
    07/04/2021 11:19:26 If this yields undesired behavior (poor response, painting
    07/04/2021 11:19:26 errors, etc) it may be disabled via: '-noscr'
    07/04/2021 11:19:26 Also see the -help entry for tuning parameters.
    07/04/2021 11:19:26 You can press 3 Alt_L's (Left "Alt" key) in a row to
    07/04/2021 11:19:26 repaint the screen, also see the -fixscreen option for
    07/04/2021 11:19:26 periodic repaints.
    07/04/2021 11:19:26
    07/04/2021 11:19:26 XKEYBOARD: number of keysyms per keycode 7 is greater
    07/04/2021 11:19:26 than 4 and 51 keysyms are mapped above 4.
    07/04/2021 11:19:26 Automatically switching to -xkb mode.
    07/04/2021 11:19:26 If this makes the key mapping worse you can
    07/04/2021 11:19:26 disable it with the "-noxkb" option.
    07/04/2021 11:19:26 Also, remember "-remap DEAD" for accenting characters.
    07/04/2021 11:19:26
    07/04/2021 11:19:26 X FBPM extension not supported.
    07/04/2021 11:19:26 X display is not capable of DPMS.
    07/04/2021 11:19:26 --------------------------------------------------------
    07/04/2021 11:19:26
    07/04/2021 11:19:26 Default visual ID: 0x21
    07/04/2021 11:19:26 Read initial data from X display into framebuffer.
    07/04/2021 11:19:26 initialize_screen: fb_depth/fb_bpp/fb_Bpl 24/32/5120
    07/04/2021 11:19:26
    07/04/2021 11:19:26 X display :0 is 32bpp depth=24 true color
    07/04/2021 11:19:26
    07/04/2021 11:19:26 Listening for VNC connections on TCP port 5900
    07/04/2021 11:19:26
    07/04/2021 11:19:26 Xinerama is present and active (e.g. multi-head).
    07/04/2021 11:19:26 Xinerama: number of sub-screens: 1
    07/04/2021 11:19:26 Xinerama: no blackouts needed (only one sub-screen)
    07/04/2021 11:19:26
    07/04/2021 11:19:26 fb read rate: 861 MB/sec
    07/04/2021 11:19:26 fast read: reset -wait ms to: 10
    07/04/2021 11:19:26 fast read: reset -defer ms to: 10
    07/04/2021 11:19:26 The X server says there are 10 mouse buttons.
    07/04/2021 11:19:26 screen setup finished.
    07/04/2021 11:19:26

    The VNC desktop is: bceeb4c88123:0

    0

    ******************************************************************************
    Have you tried the x11vnc '-ncache' VNC client-side pixel caching feature yet?

    The scheme stores pixel data offscreen on the VNC viewer side for faster
    retrieval. It should work with any VNC viewer. Try it by running:

    x11vnc -ncache 10 ...

    One can also add -ncache_cr for smooth 'copyrect' window motion.
    More info: http://www.karlrunge.com/x11vnc/faq.html#faq-client-caching

    07/04/2021 11:20:04 Got connection from client 127.0.0.1
    07/04/2021 11:20:04 other clients:
    07/04/2021 11:20:04 Got 'ws' WebSockets handshake
    07/04/2021 11:20:04 Got protocol: binary
    07/04/2021 11:20:04 - webSocketsHandshake: using binary/raw encoding
    07/04/2021 11:20:04 - WebSockets client version hybi-13
    07/04/2021 11:20:04 Disabled X server key autorepeat.
    07/04/2021 11:20:04 to force back on run: 'xset r on' (3 times)
    07/04/2021 11:20:04 incr accepted_client=1 for 127.0.0.1:45350 sock=10
    07/04/2021 11:20:04 Client Protocol Version 3.8
    07/04/2021 11:20:04 Protocol version sent 3.8, using 3.8
    07/04/2021 11:20:04 rfbProcessClientSecurityType: executing handler for type 1
    07/04/2021 11:20:04 rfbProcessClientSecurityType: returning securityResult for client rfb version >= 3.8
    07/04/2021 11:20:04 Pixel format for client 127.0.0.1:
    07/04/2021 11:20:04 32 bpp, depth 24, little endian
    07/04/2021 11:20:04 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0
    07/04/2021 11:20:04 no translation needed
    07/04/2021 11:20:04 Enabling NewFBSize protocol extension for client 127.0.0.1
    07/04/2021 11:20:04 Enabling full-color cursor updates for client 127.0.0.1
    07/04/2021 11:20:04 Using image quality level 6 for client 127.0.0.1
    07/04/2021 11:20:04 Using JPEG subsampling 0, Q79 for client 127.0.0.1
    07/04/2021 11:20:04 Using compression level 9 for client 127.0.0.1
    07/04/2021 11:20:04 Enabling LastRect protocol extension for client 127.0.0.1
    07/04/2021 11:20:04 rfbProcessClientNormalMessage: ignoring unsupported encoding type Enc(0xFFFFFECC)
    07/04/2021 11:20:04 Using tight encoding for client 127.0.0.1
    07/04/2021 11:20:05 client_set_net: 127.0.0.1 0.0000
    07/04/2021 11:20:05 created xdamage object: 0x40002c
    07/04/2021 11:20:05 copy_tiles: allocating first_line at size 41
    07/04/2021 11:20:05 client_set_net: 127.0.0.1 0.0000
    07/04/2021 11:20:05 created xdamage object: 0x40002c
    07/04/2021 11:20:05 copy_tiles: allocating first_line at size 41
    07/04/2021 11:20:14 created selwin: 0x40002d
    07/04/2021 11:20:14 called initialize_xfixes()
    07/04/2021 11:20:14 created selwin: 0x40002d
    07/04/2021 11:20:14 called initialize_xfixes()
    07/04/2021 11:20:14 client 1 network rate 523.3 KB/sec (18317.5 eff KB/sec)
    07/04/2021 11:20:14 client 1 latency: 1.8 ms
    07/04/2021 11:20:14 dt1: 0.0061, dt2: 0.0244 dt3: 0.0018 bytes: 15462
    07/04/2021 11:20:14 link_rate: LR_LAN - 1 ms, 523 KB/s

    07/04/2021 11:23:19 got closure, reason 1001
    07/04/2021 11:23:19 rfbProcessClientNormalMessage: read: Connection reset by peer
    07/04/2021 11:23:19 client_count: 0
    07/04/2021 11:23:19 Restored X server key autorepeat to: 1
    07/04/2021 11:23:19 Client 127.0.0.1 gone
    07/04/2021 11:23:19 Statistics events Transmit/ RawEquiv ( saved)
    07/04/2021 11:23:19 ServerCutText : 1 | 8/ 8 ( 0.0%)
    07/04/2021 11:23:19 FramebufferUpdate : 234 | 0/ 0 ( 0.0%)
    07/04/2021 11:23:19 LastRect : 21 | 252/ 252 ( 0.0%)
    07/04/2021 11:23:19 tight : 430 | 313154/ 8603816 ( 96.4%)
    07/04/2021 11:23:19 RichCursor : 1 | 1374/ 1374 ( 0.0%)
    07/04/2021 11:23:19 TOTALS : 687 | 314788/ 8605450 ( 96.3%)
    07/04/2021 11:23:19 Statistics events Received/ RawEquiv ( saved)
    07/04/2021 11:23:19 KeyEvent : 30 | 240/ 240 ( 0.0%)
    07/04/2021 11:23:19 PointerEvent : 92 | 552/ 552 ( 0.0%)
    07/04/2021 11:23:19 FramebufferUpdate : 235 | 2350/ 2350 ( 0.0%)
    07/04/2021 11:23:19 SetEncodings : 1 | 56/ 56 ( 0.0%)
    07/04/2021 11:23:19 SetPixelFormat : 1 | 20/ 20 ( 0.0%)
    07/04/2021 11:23:19 TOTALS : 359 | 3218/ 3218 ( 0.0%)
    07/04/2021 11:23:19 destroyed xdamage object: 0x40002c
    07/04/2021 11:23:19 Got connection from client 127.0.0.1
    07/04/2021 11:23:19 other clients:
    07/04/2021 11:23:19 Got 'ws' WebSockets handshake
    07/04/2021 11:23:19 Got protocol: binary
    07/04/2021 11:23:19 - webSocketsHandshake: using binary/raw encoding
    07/04/2021 11:23:19 - WebSockets client version hybi-13
    07/04/2021 11:23:19 Disabled X server key autorepeat.
    07/04/2021 11:23:19 to force back on run: 'xset r on' (3 times)
    07/04/2021 11:23:19 incr accepted_client=2 for 127.0.0.1:49534 sock=10
    07/04/2021 11:23:19 Client Protocol Version 3.8
    07/04/2021 11:23:19 Protocol version sent 3.8, using 3.8
    07/04/2021 11:23:19 rfbProcessClientSecurityType: executing handler for type 1
    07/04/2021 11:23:19 rfbProcessClientSecurityType: returning securityResult for client rfb version >= 3.8
    07/04/2021 11:23:19 Pixel format for client 127.0.0.1:
    07/04/2021 11:23:19 32 bpp, depth 24, little endian
    07/04/2021 11:23:19 true colour: max r 255 g 255 b 255, shift r 16 g 8 b 0
    07/04/2021 11:23:19 no translation needed
    07/04/2021 11:23:19 Enabling NewFBSize protocol extension for client 127.0.0.1
    07/04/2021 11:23:19 Enabling full-color cursor updates for client 127.0.0.1
    07/04/2021 11:23:19 Using image quality level 6 for client 127.0.0.1
    07/04/2021 11:23:19 Using JPEG subsampling 0, Q79 for client 127.0.0.1
    07/04/2021 11:23:19 Using compression level 9 for client 127.0.0.1
    07/04/2021 11:23:19 Enabling LastRect protocol extension for client 127.0.0.1
    07/04/2021 11:23:19 rfbProcessClientNormalMessage: ignoring unsupported encoding type Enc(0xFFFFFECC)
    07/04/2021 11:23:19 Using tight encoding for client 127.0.0.1
    07/04/2021 11:23:19 client_set_net: 127.0.0.1 0.0000
    07/04/2021 11:23:19 created xdamage object: 0x40002e
    07/04/2021 11:23:24 client 2 network rate 663.8 KB/sec (17817.9 eff KB/sec)
    07/04/2021 11:23:24 client 2 latency: 2.4 ms
    07/04/2021 11:23:24 dt1: 0.0115, dt2: 0.0210 dt3: 0.0024 bytes: 20780
    07/04/2021 11:23:24 link_rate: LR_LAN - 2 ms, 663 KB/s