SPOautos

Members · 389 posts

Posts posted by SPOautos

  1. I ran the Update Assistant and it said there are no issues and I should be good to update, but I've never skipped several versions before, so I just want to make sure it will be okay. All of the plugins and apps are up to date and everything runs well. I'm just updating to get current again. I guess I still have update PTSD from several years ago, when an update went really badly and I couldn't boot up.

  2. Okay, so I've been running my server as-is for almost 3 years now with great success. But today something is going wrong, and I'm not sure what it is or the best way to troubleshoot hardware issues with Unraid.

     

    I'm running 6.11.5 and it's been running fine for several weeks with no issues. I can log into the server from my laptop, but on the Dashboard the processor is at 0%, the RAM shows 0%, and the GPU shows 0% (usually at least the GPU fan shows RPM). The processor fan shows 0 RPM even though I can see that it's actually spinning, and the NIC is on the motherboard and shows 0 activity, when it usually shows tiny amounts of activity all the time. It shows my shares and dockers, but when I go to the Main tab it doesn't show anything. If I try to update a docker, it can't.

     

    I'm thinking maybe the motherboard died? But could this also be the processor? Or something else?

     

    Any direction and/or advice is greatly appreciated!!!
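
    In case it helps with triage: a minimal first check, assuming the Unraid terminal or SSH still responds, is to look at the tail of the syslog for hardware errors and dump sensor readings (the sensors command is only present if a sensors package/plugin is installed; that part is an assumption):

    # Look for hardware-level errors (MCE, PCIe, disk resets) in the recent log
    tail -n 200 /var/log/syslog | grep -iE 'error|mce|fail'

    # Dump temperature/fan readings if lm-sensors is available
    sensors 2>/dev/null || echo 'sensors not installed'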

  3. I recently updated to 6.11.5; prior to that I was on 6.10.3. When I was on 6.10.3 and the server was at rest, it would run at about 1% CPU load. Now it has been running between 5% and 15% even with nothing going on. All my apps in Docker show 0% CPU load, so I'm not sure what would be causing the CPU to be running. The load I have at idle is what it normally would be while playing a high-definition Plex movie or something like that. I don't use the server for much more than storing files and Plex.

     

    With all the apps showing 0%, what would be working the CPU? Looking in my syslog, it looks like there are a lot of errors that I don't have any idea what they are related to.

     

    Thanks!

    tower-syslog-20221207-0419.zip
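
    For anyone hitting the same thing: a quick way to see which host-side processes (which the Docker page won't show) are eating the CPU, straight from the Unraid terminal:

    # Top CPU consumers, including host processes outside any container
    ps aux --sort=-%cpu | head -n 15

    # One-shot snapshot of overall load and the busiest tasks
    top -b -n 1 | head -n 20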

  4. On 11/30/2022 at 8:40 AM, SPOautos said:

    Hey guys, I'm still on 6.10.3 and everything has been running well; I haven't touched my Unraid since I did the 6.10.3 update. BUT I don't want it to get too far behind, so I'm considering updating to this latest one. I am NOT very knowledgeable about Unraid... am I likely going to have a smooth transition, or will there probably be issues?

     

    Are there things I can do before updating to give it a higher chance of not messing up? I'll make a backup copy of the flash, update all my apps and plugins, and run a parity check... is there anything else that might help before updating?

     

    Just wanted to comment that my transition from 6.10.3 to 6.11.5 seems to have gone smoothly and problem-free. I updated all my apps and plugins, and for some reason my GPU fan went to 100% when I updated my plugins, but I rebooted and it was back to normal. Then I ran the Update Assistant; it suggested I remove NerdPack since it's no longer compatible, so I did that, then updated the OS. It installed, then took 2 minutes to do a clean shutdown and reboot, and it's back up and running. Hopefully it'll continue being as stable as it was on 6.10.3... it's been running continuously since I did the 6.10.3 update, and I look forward to it remaining that stable going forward.

     

    Thank you for all of your advice and such. I couldn't have this server without you guys on the forum... I don't know jack about computers... I have no idea how I built this from scratch and use it daily, hahaha.

     

    Much appreciated!!!

  5. Hello, I have a Radeon AMD GPU which literally does nothing but sit there in my server. It's never used, and typically the fans don't spin at all or might spin very slowly. However, I was just updating my plugins, and as soon as I hit the button for them to update, the GPU fans spun up to 100% and won't come down. I had two plugins I thought might be an issue: a fan control plugin and a Radeon plugin. I have deleted both of those plugins and it didn't make any difference.

     

    I'm currently running 6.10.3 but was updating everything in preparation to update to 6.11.5.

     

    Could this be because my Unraid is out of date, and updating to 6.11.5 might correct it? Or should I try to resolve this before I update the OS?

     

    Thanks for any advice!

  6. 2 hours ago, trurl said:

    And if you want to be even more careful, disable Autostart in Settings before rebooting.

     

    As I was updating my dockers and plugins, something odd happened. As soon as I clicked to update the plugins, my GPU fan ramped up to 100% (older Radeon AMD GPU) and I can't seem to get it back down. Should I try to resolve this before upgrading to 6.11.5, or is it possible that this is happening because I updated a plugin while still on the old 6.10.3 OS, in which case upgrading my Unraid may resolve the issue?

     

    I have a plugin for fan control and a plugin for Radeon GPUs... neither seems to have any effect on the fan being at 100%, as I have tried disabling and enabling them several times. *I actually completely removed the fan control and Radeon plugins, but that did not help anything.*

     

    For the record, my GPU is never used. I have never heard its fan running at 100%; it just sits there doing nothing, and normally if the fan spins at all it's very little... even right now, it's doing nothing except spinning the fan wide open.

     

    Thoughts?
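
    For reference, if the card's driver exposes the standard Linux hwmon interface (an assumption; it depends on the radeon/amdgpu driver, and card0 may differ on your system), the fan state can be inspected and handed back to automatic control from the terminal:

    # Find the card's hwmon directory (card0 is an assumption)
    HWMON=$(ls -d /sys/class/drm/card0/device/hwmon/hwmon* 2>/dev/null | head -n 1)

    cat "$HWMON/pwm1_enable"   # 1 = manual control, 2 = automatic
    cat "$HWMON/pwm1"          # current fan PWM duty cycle, 0-255

    # Hand fan control back to the driver's automatic curve
    echo 2 > "$HWMON/pwm1_enable"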

  7. 1 hour ago, lurknyou said:

    Updated from 6.10.3 to 6.11.5 just now and everything went really smoothly. I had to reinstall the Nvidia plugin and reboot the server again to get Plex to be able to use HW transcoding, but that isn't out of the norm for me.

     

    Thanks again team!

     

    Did you do anything before updating to help ensure it went smoothly?

  8. Hey guys, I'm still on 6.10.3 and everything has been running well; I haven't touched my Unraid since I did the 6.10.3 update. BUT I don't want it to get too far behind, so I'm considering updating to this latest one. I am NOT very knowledgeable about Unraid... am I likely going to have a smooth transition, or will there probably be issues?

     

    Are there things I can do before updating to give it a higher chance of not messing up? I'll make a backup copy of the flash (sketched below), update all my apps and plugins, and run a parity check... is there anything else that might help before updating?
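
    For the flash backup step, a minimal sketch from the terminal (the destination share /mnt/user/backups is an assumption; the flash drive is mounted at /boot on Unraid, and the GUI's Flash page also offers a backup button):

    # Zip the whole flash drive to a share on the array
    BACKUP=/mnt/user/backups/flash-$(date +%Y%m%d).zip
    cd /boot && zip -r "$BACKUP" .
    echo "Flash backed up to $BACKUP"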

  9. 9 hours ago, Arbadacarba said:

    One piece of advice... if unbalance gets stopped before it's finished, it leaves duplicate files in the original location (it copies and then deletes; I screwed up and stopped it, only to discover later that I had a HUGE amount of duplicate files afterwards), so don't try to cancel an unbalance session because it's taking too long.

     

    I still use it but now I'm cautious.

     

    Just to have a general idea, how much data were you moving and how long was it taking? If I'm moving 2TB between two drives, should it move at the speed of the hardware, or does unbalance slow it way down? I'll probably move 2TB at a time, with a total of 8TB moved.

     

    Will that take hours, days, weeks???
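
    Side note for anyone who did interrupt a move: since unbalance copies before it deletes, a stopped run leaves the same paths on both disks. A minimal sketch for spotting them, assuming disk1 was the source, disk5 the destination, and Share the share being moved (all three names are placeholders):

    # Paths printed here exist on BOTH disks, i.e. duplicates left behind
    comm -12 \
      <(cd /mnt/disk1 && find Share -type f | sort) \
      <(cd /mnt/disk5 && find Share -type f | sort)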

  10. 5 hours ago, itimpi said:

    Most of the time the upgrades are painless, but it is always worth taking precautions against anything going wrong. Make sure you have a backup of your flash drive before attempting the upgrade, as you can then easily revert by copying the backup back onto the flash drive.
     

    It is a good idea to turn off auto-start of the array until you have done an initial check after the upgrade. Temporarily disabling the Docker and VM services is also not a bad idea.

     

    The one item that most frequently causes problems is if you have VMs with hardware pass-through, as the IDs of the hardware can change. In the worst case you can find an ID associated with a GPU now ends up assigned to an HBA. Make sure that you do not have any VMs set to auto-start until you can check the passed-through hardware.

     

    Thank You! It seems to be going smoothly so far!
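
    On the hardware-ID point above, one cheap precaution is to snapshot the PCI IDs before upgrading so VM passthrough assignments can be checked afterwards, e.g.:

    # Before the upgrade: record every PCI device with its IDs
    lspci -nn > /boot/pci-ids-before-upgrade.txt

    # After rebooting into the new version: any diff means recheck VM passthrough
    lspci -nn | diff /boot/pci-ids-before-upgrade.txt - && echo "IDs unchanged"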

  11. 2 minutes ago, Hoopster said:

    Use the Unbalance plugin.  It will do what you need.

     

    Thank You! I'm on 6.9.2, which has been very stable for me, and I'm thinking of upgrading to 6.10.3. Do you think I should hold off and add the new drive, move all the data around, etc., before upgrading? Or should I just go ahead and upgrade, make sure it's stable, then change the drives and such? I won't get the new drive for a week or so, so I thought maybe I should go ahead and upgrade the OS while I'm waiting. Do you have any thoughts on that?

  12. I am still on 6.9.2 and my server runs very well with no issues, but I was thinking of updating to the latest 6.10.3. The issue is that I'm not much of a computer/Unraid expert, so I'm nervous about it not going well. Are there certain things (like hardware and applications) that are more likely to cause trouble?

     

    Thanks!

  13. I have four 6TB drives and a 12TB parity. My 6TB drives are nearly full (each between 5.5 and 5.75TB used) and I have a new 12TB drive in the mail to me. Is there a way I can move some of the data from each 6TB drive to the 12TB drive so that they are not so close to maxed out? Maybe move 2TB from each of the 6TB drives to the new 12TB drive. If not, is there a way I can tell Unraid to stop adding data to the 6TB drives and only use the 12TB drive?

     

    Thanks!
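
    Besides the unbalance plugin suggested in the replies, the same move can be done by hand from the terminal, always disk-to-disk (never mixing /mnt/user with /mnt/diskX in the same command). A sketch, with disk1, disk5, and Movies as placeholder names:

    # Copy a folder from a full 6TB disk to the new 12TB disk
    rsync -avh --progress /mnt/disk1/Movies/ /mnt/disk5/Movies/

    # Verify the sizes match before deleting the originals from disk1
    du -sh /mnt/disk1/Movies /mnt/disk5/Movies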

  14. 3 hours ago, Squid said:

    It's the mining containers taking up the space.  I don't know how they work well enough to be able to advise if it's possible to store the data outside of the container, etc.  Ask in the respective support threads for them.

     

    But all of my containers added together are only 19.9GB of disk utilization, right? How is it so high? Is this just based on a setting somewhere where I'm only allowing a certain amount of space dedicated to containers?
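
    One way to see where the space is actually going inside docker.img, from the terminal:

    # Per-container writable-layer size; '(virtual ...)' includes the base image
    docker ps -as

    # Break down image / container / volume usage
    docker system df -v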

  15. I recently set up a syslog file that runs, and this is what it shows for the 11th, 12th, and 13th, but I don't think I'm seeing anything in here that would be causing an issue of Docker image disk utilization being high. I do have mining running, and it's been running for months, so I doubt that is the cause; it doesn't use much memory, it uses the GPU and half the CPU. I have no idea what all these entries are regarding port numbers and entering blocking/forwarding/disabled states.

     

    Jun 11 00:00:32 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 01:01:14 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 02:00:14 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 03:00:11 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 04:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 04:00:47 Tower root: /var/lib/docker: 12.9 GiB (13833793536 bytes) trimmed on /dev/loop2
    Jun 11 04:00:47 Tower root: /mnt/cache: 331.4 GiB (355793293312 bytes) trimmed on /dev/nvme0n1p1
    Jun 11 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
    Jun 11 05:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 06:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 07:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered disabled state
    Jun 11 07:44:09 Tower kernel: device veth2de21d8 entered promiscuous mode
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered forwarding state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered disabled state
    Jun 11 07:44:09 Tower kernel: eth0: renamed from veth5f484ab
    Jun 11 07:44:09 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2de21d8: link becomes ready
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered blocking state
    Jun 11 07:44:09 Tower kernel: docker0: port 5(veth2de21d8) entered forwarding state
    Jun 11 08:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 08:24:45 Tower kernel: docker0: port 6(veth00d4a84) entered blocking state
    Jun 11 08:24:45 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:24:45 Tower kernel: device veth00d4a84 entered promiscuous mode
    Jun 11 08:24:46 Tower kernel: eth0: renamed from veth164898c
    Jun 11 08:24:46 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth00d4a84: link becomes ready
    Jun 11 08:24:46 Tower kernel: docker0: port 6(veth00d4a84) entered blocking state
    Jun 11 08:24:46 Tower kernel: docker0: port 6(veth00d4a84) entered forwarding state
    Jun 11 08:53:06 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:53:06 Tower kernel: veth164898c: renamed from eth0
    Jun 11 08:53:06 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:53:06 Tower kernel: device veth00d4a84 left promiscuous mode
    Jun 11 08:53:06 Tower kernel: docker0: port 6(veth00d4a84) entered disabled state
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered blocking state
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 08:53:07 Tower kernel: device vetha6de89e entered promiscuous mode
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered blocking state
    Jun 11 08:53:07 Tower kernel: docker0: port 6(vetha6de89e) entered forwarding state
    Jun 11 08:53:07 Tower kernel: eth0: renamed from veth61b0ff9
    Jun 11 08:53:07 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha6de89e: link becomes ready
    Jun 11 09:00:08 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 09:00:37 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 09:00:37 Tower kernel: veth61b0ff9: renamed from eth0
    Jun 11 09:00:37 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 09:00:37 Tower kernel: device vetha6de89e left promiscuous mode
    Jun 11 09:00:37 Tower kernel: docker0: port 6(vetha6de89e) entered disabled state
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered blocking state
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:00:37 Tower kernel: device veth3969633 entered promiscuous mode
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered blocking state
    Jun 11 09:00:37 Tower kernel: docker0: port 6(veth3969633) entered forwarding state
    Jun 11 09:00:38 Tower kernel: eth0: renamed from vethfd855b4
    Jun 11 09:00:38 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3969633: link becomes ready
    Jun 11 09:11:08 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:11:08 Tower kernel: vethfd855b4: renamed from eth0
    Jun 11 09:11:08 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:11:08 Tower kernel: device veth3969633 left promiscuous mode
    Jun 11 09:11:08 Tower kernel: docker0: port 6(veth3969633) entered disabled state
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered blocking state
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:11:09 Tower kernel: device veth3cc4ddf entered promiscuous mode
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered blocking state
    Jun 11 09:11:09 Tower kernel: docker0: port 6(veth3cc4ddf) entered forwarding state
    Jun 11 09:11:09 Tower kernel: eth0: renamed from veth83942c3
    Jun 11 09:11:09 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3cc4ddf: link becomes ready
    Jun 11 09:19:24 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:19:24 Tower kernel: veth83942c3: renamed from eth0
    Jun 11 09:19:24 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:19:24 Tower kernel: device veth3cc4ddf left promiscuous mode
    Jun 11 09:19:24 Tower kernel: docker0: port 6(veth3cc4ddf) entered disabled state
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered blocking state
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:19:25 Tower kernel: device veth6bc6288 entered promiscuous mode
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered blocking state
    Jun 11 09:19:25 Tower kernel: docker0: port 6(veth6bc6288) entered forwarding state
    Jun 11 09:19:25 Tower kernel: eth0: renamed from vethfac2dbd
    Jun 11 09:19:25 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6bc6288: link becomes ready
    Jun 11 09:22:54 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:22:54 Tower kernel: vethfac2dbd: renamed from eth0
    Jun 11 09:22:54 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:22:54 Tower kernel: device veth6bc6288 left promiscuous mode
    Jun 11 09:22:54 Tower kernel: docker0: port 6(veth6bc6288) entered disabled state
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered blocking state
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered disabled state
    Jun 11 09:22:55 Tower kernel: device vethb4fba36 entered promiscuous mode
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered blocking state
    Jun 11 09:22:55 Tower kernel: docker0: port 6(vethb4fba36) entered forwarding state
    Jun 11 09:22:55 Tower kernel: eth0: renamed from vethcd82465
    Jun 11 09:22:55 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb4fba36: link becomes ready
    Jun 11 10:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 11:00:08 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 12:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 13:00:01 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 14:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 15:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 16:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 17:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 18:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 19:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 20:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 21:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 22:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 11 23:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 00:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 01:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 02:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 03:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 04:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 04:00:43 Tower root: /var/lib/docker: 3.1 GiB (3280904192 bytes) trimmed on /dev/loop2
    Jun 12 04:00:43 Tower root: /mnt/cache: 326.5 GiB (350595309568 bytes) trimmed on /dev/nvme0n1p1
    Jun 12 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
    Jun 12 04:40:15 Tower root: FCP Debug Log: root     25980 1198  0.8 3598248 274764 ?      Sl   Jun11 13867:55 ./xmrig --url=xmr-us-east1.nanopool.org:14433 --coin=monero --user=  *I removed the user data from the copy/paste*
    Jun 12 04:40:15 Tower root: Fix Common Problems: Warning: Possible mining software running ** Ignored
    Jun 12 05:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 06:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 07:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 08:00:25 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 09:00:23 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 10:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 11:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 12:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 13:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 14:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 15:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 16:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 17:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 18:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 19:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 20:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 21:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 22:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 12 23:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 00:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 01:00:09 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 02:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 03:00:07 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 04:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 04:00:42 Tower root: /var/lib/docker: 3 GiB (3194228736 bytes) trimmed on /dev/loop2
    Jun 13 04:00:42 Tower root: /mnt/cache: 326.7 GiB (350819770368 bytes) trimmed on /dev/nvme0n1p1
    Jun 13 04:40:01 Tower root: Fix Common Problems Version 2021.05.03
    Jun 13 04:40:14 Tower root: FCP Debug Log: root     25980 1198  0.8 3596196 277248 ?      Sl   Jun11 31122:13 ./xmrig --url=xmr-us-east1.nanopool.org:14433 --coin=monero --user=  *I removed the user data from the copy/paste*
    Jun 13 04:40:14 Tower root: Fix Common Problems: Warning: Possible mining software running ** Ignored
    Jun 13 04:40:15 Tower apcupsd[6677]: apcupsd exiting, signal 15
    Jun 13 04:40:15 Tower apcupsd[6677]: apcupsd shutdown succeeded
    Jun 13 04:40:18 Tower apcupsd[11753]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
    Jun 13 04:40:18 Tower apcupsd[11753]: NIS server startup succeeded
    Jun 13 05:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 06:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 07:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 08:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 08:56:11 Tower emhttpd: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin update unassigned.devices.plg
    Jun 13 08:56:11 Tower root: plugin: running: anonymous
    Jun 13 08:56:11 Tower root: plugin: creating: /boot/config/plugins/unassigned.devices/unassigned.devices-2021.06.11.tgz - downloading from URL https://github.com/dlandon/unassigned.devices/raw/master/unassigned.devices-2021.06.11.tgz
    Jun 13 08:56:12 Tower root: plugin: checking: /boot/config/plugins/unassigned.devices/unassigned.devices-2021.06.11.tgz - MD5
    Jun 13 08:56:12 Tower root: plugin: creating: /tmp/start_unassigned_devices - from INLINE content
    Jun 13 08:56:12 Tower root: plugin: setting: /tmp/start_unassigned_devices - mode to 0770
    Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/unassigned.devices.cfg already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/samba_mount.cfg already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /boot/config/plugins/unassigned.devices/iso_mount.cfg already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/smb-settings.conf already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/config/smb-extra.conf already exists
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/add-smb-extra already exists
    Jun 13 08:56:12 Tower root: plugin: setting: /tmp/unassigned.devices/add-smb-extra - mode to 0770
    Jun 13 08:56:12 Tower root: plugin: skipping: /tmp/unassigned.devices/remove-smb-extra already exists
    Jun 13 08:56:12 Tower root: plugin: setting: /tmp/unassigned.devices/remove-smb-extra - mode to 0770
    Jun 13 08:56:12 Tower root: plugin: running: anonymous
    Jun 13 09:00:45 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
    Jun 13 10:00:16 Tower crond[2908]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
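
    For what it's worth, the utilization warning measures how full the docker.img loop file is, not RAM; that can be confirmed from the terminal with:

    # Shows size/used/free of the loop-mounted docker image (/dev/loop2 above)
    df -h /var/lib/docker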

     

     

  16. Also, if it makes a difference, everything is up to date: 6.9.2 with all containers and plugins updated. I'm pretty sure this is going to keep getting higher and higher over a couple-day period. I'm also pretty sure this happened a couple of weeks ago and eventually made my server crash (well, it crashed and I never found out why, and it was running fine when it booted back up, so I just *think* it must have been because of this).

     

    I've looked through the log, but I don't see anything jumping out at me... but I also don't know enough about what goes on behind the scenes to know what everything means. I'm not an IT/computer guy... we have a children's clothing store and I do construction, lol. To be honest, I don't even know how I got all this working and being useful, except that the forum community is awesome.

     

     

  17. Hey guys, on the Dashboard under Memory, then Docker, it's at 71% and growing; it's normally a very low number. I attached a diagnostic; can anyone tell what's causing it? I also have a notification of "Warning [TOWER] - Docker image disk utilization of 71%". When I look under the Docker tab in advanced view, none of the containers are using a lot of memory.

     

    Thanks...

    tower-diagnostics-20210613-0439.zip

  18. On 4/28/2021 at 2:18 PM, ijuarez said:

     

    I did not get notified of your response; I will try that, thank you.

     

    Were you ever able to get this resolved? I seem to be having a similar issue after updating. I can't find the server from the other devices in the Workgroup. Wondering what fixed it for you. Similarly, my settings have "Yes" for Workgroup.

  19. 15 hours ago, Steace said:

    For those who have dual Intel Xeon E5-2650 v3 CPUs (or maybe other Xeons too): I removed the first 6 cores and their threads, and now I get 5000-5500 H/s instead of ~3500 H/s or less on anything else I tried.

    What I tried: one more core/thread pair plus or minus, all of them, only the first core/thread removed like the OP said, and more.

    P.S. I also use the OP's script and this as additional arguments:

    
    --no-color --asm intel --randomx-1gb-pages

     

    If that can help anyone 😎

     

    Thanks for putting this docker image in CA 😃, it's really appreciated.

    I saw you tell people that it's not worth it to add GPU support, but Ethereum has gone beyond the 4GB mark; lots of people still have 4GB GPUs, so it may be nice to have the possibility to mine Monero since we can't do Ethereum anymore (maybe with a hack, but probably not recommended). It may not be that profitable, but hey, we're doing this mostly for fun.

    The decision is yours to make; that was just my 2 cents ;)

     

    I have a Xeon E5-2690 v3 and had --asm intel and --randomx-1gb-pages, but I could never seem to get the 1GB pages to work. When I removed those two arguments, my H/s actually went UP about 200 H/s. I have no idea why.

     

    Have you done a before/after comparison to see what difference they make? Also, can you tell from your container log that you're actually getting 1GB pages?
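
    For anyone else chasing this: whether the kernel actually granted any 1GB pages can be checked from the host, and pages can be reserved at runtime. Reserving often fails once memory is fragmented, which seems a plausible reason --randomx-1gb-pages falls back; the count of 3 is just what RandomX's ~2.3GB dataset needs:

    # Current huge page counters, including the 1GB pool
    grep -i huge /proc/meminfo

    # Try to reserve three 1GB pages, then confirm how many were granted
    echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages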

  20. 8 hours ago, G Speed said:

     

    * ABOUT XMRig/6.10.0 gcc/9.3.0
    * LIBS libuv/1.34.2 OpenSSL/1.1.1f hwloc/2.1.0
    * HUGE PAGES supported
    * 1GB PAGES supported
    * CPU Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz (1) 64-bit AES
    L2:1.0 MB L3:6.0 MB 4C/4T NUMA:1
    * MEMORY 11.2/15.5 GB (72%)

     

    Without XMRig running, I'm only at 15% RAM utilization. So yeah... something's wrong.

     

     

    My XMRig log was also showing really high RAM usage, like 26GB out of 32GB, but it was related to me having Plex transcoding in RAM: I had a large chunk of my RAM allocated for that purpose, so XMRig was seeing that and counting it as used even though my actual usage was only about 7GB.

     

    HOWEVER, that said, even after I made some changes and XMRig only sees my usage as about 7GB out of 32GB, I still can't use the 1GB pages. For some reason RandomX 1GB pages fails. Not sure why.

     

    Anyway, are you doing Plex transcoding in RAM? If so, pull up the script, click edit, and see how much RAM you have allocated to it. I had way more allocated than I needed, so I changed it to about 4GB and then restarted my array so it would take effect (just running the script didn't seem to work, but stopping then starting the array did change how much was allocated). See the sketch below for what that boils down to.
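
    For context, a RAM transcode scratch area of the kind that script sets up boils down to a tmpfs mount; a minimal sketch, with the mount point and 4G size as assumptions (the Plex container's transcoder directory then has to be mapped to this path):

    # Create a 4GB RAM-backed scratch directory for Plex transcoding
    mkdir -p /tmp/PlexRamScratch
    mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch
    df -h /tmp/PlexRamScratch   # confirm the new size took effect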
