_cr8tor_ Posted December 9, 2021

8 minutes ago, Einsteinjr said: Definitely does not match mine. My cryptodoge container is taking 3Gb on its own.

Reboot that container and see if it changes. I've rebooted a couple of mine and their memory usage has dropped:

- cactus went from 2.548GiB to 1.615GiB once it completed the reboot and whatnot associated with said reboot. Now it's sitting around 1.615 and it's been a while.
- flax went from 2.721GiB to 1.992GiB
- Maize, from 2.354GiB to 1.635GiB
- btcgreen, from 2.46GiB to 1.495GiB
- And then I rebooted machinaris and it went from 3.994GiB to 2.67GiB

All of these were rebooted and given ample time to catch up and calm down again. There is an obvious spike in processor utilization and a rebuild sequence you can watch happen before each container settles back into what is essentially normal usage.

Hey @guy.davis, any idea what's going on here? I'm not saying something is broken - I don't know enough to know - but it seems worth a looksee by your excellence. Something seems to be munching on memory. Do you work for a memory manufacturer? Is that the gig here with all this excellent work you're doing? haha. Let me know if there is anything I can do to help; I can hop on the Discord if needed for more communication on diagnosing this. In the meantime, I'm rebooting the rest of the containers too.
Einsteinjr Posted December 9, 2021

2 minutes ago, _cr8tor_ said: Reboot that container and see if it changes. I've rebooted a couple of mine and their memory usage has dropped…

Yup - if I reboot the container, memory usage drops significantly and then gradually creeps back up. Most of my Machinaris containers are using >3GB now after running for ~18 hours. Now that I have a buffer to work with, I'm going to leave them on to see if they will continue to use up whatever is available. I'm having issues with Telegraf currently, so I'm not able to get the nice metrics I usually can.
guy.davis (Author) Posted December 9, 2021

17 hours ago, Einsteinjr said: Feature suggestion: Tracking the growth of each currency over time through some time-based database like influxdb.

Good idea! I've added it to the GH issues list.
guy.davis (Author) Posted December 9, 2021

2 hours ago, _cr8tor_ said: Hey @guy.davis, any idea whats going on here? Something seem to be munching on memory.

It's possible that some of you are hitting issues with the addition of ForkTools and the multi-proc patch that is applied automatically on container start. I haven't seen this myself, but everyone has different hardware/configurations. In `:develop`, you can pass the environment variable "forktools_skip_build" with value "true" to skip this patching. I need to verify this before it gets pushed to the `:test` stream in a bit. Let's see if that helps you all.
_cr8tor_ Posted December 9, 2021

5 minutes ago, guy.davis said: In `:develop`, you can pass environment variable "forktools_skip_build" with value "true" to skip this patching. …

I'm still a bit of a newb when it comes to Unraid and all this. To clarify: should I change each of my containers (machinaris and each coin) from :latest to :develop, and then also manually add the environment variable "forktools_skip_build" with value "true" to each container? Is that correct? Or should I perform those steps on just the main machinaris container?

And for what it's worth, I'm running an i7-6600k on a Z170 chipset and currently 32GB (4x8GB) of mismatched RAM (being replaced by 64GB (4x16GB) of matched).
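For anyone running plain Docker rather than the Unraid template, recreating a fork container with that variable set might look like the sketch below. This is a command template only, not a tested recipe: the container and image names are assumptions based on the thread, so substitute whatever your setup actually uses.

```shell
# Hypothetical sketch: recreate one fork container on :develop with the
# ForkTools patch step skipped. Container/image names here are examples;
# repeat for each fork container you run.
docker stop machinaris-flax && docker rm machinaris-flax
docker run -d \
  --name machinaris-flax \
  -e forktools_skip_build=true \
  ghcr.io/guydavis/machinaris-flax:develop
```

On Unraid itself the equivalent is adding the variable under the container's "Add another Path, Port, Variable" option and switching the repository tag to :develop.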
_cr8tor_ Posted December 10, 2021

Also, what is forktools? I can't find it as an app. Is this something I should be using? haha. Can anyone give me a quick explainer and/or point me to info on it? I seem to be struggling for the right search terms - it's all dinner forks or motorcycle tools. 🙂
Einsteinjr Posted December 10, 2021

2 hours ago, _cr8tor_ said: Also, what is forktools? I cant find it as an app. …

Look at his latest release. He implemented forktools directly into the container.
Einsteinjr Posted December 10, 2021

2 hours ago, guy.davis said: It's possible that some are facing issues with the addition of ForkTools and the multi-proc patch that is applied automatically on container start. …

It seems the timeline aligns with that patch. I currently don't limit the containers' cores, so every container has all 12 cores assigned. Maybe that exacerbates the problem?
_cr8tor_ Posted December 10, 2021

13 minutes ago, Einsteinjr said: Look at his latest release. He implemented forktools directly into the container.

Oh, so it's integrated as part of the GUI elements?
guy.davis (Author) Posted December 10, 2021

6 minutes ago, _cr8tor_ said: Oh, so its integrated as part of the gui elements?

Sorry, I should have explained. No, ForkTools is a set of CLI tools available within the container CLI itself. The primary use case is trying to limit memory usage.
_cr8tor_ Posted December 10, 2021

5 minutes ago, guy.davis said: No, Forktools are CLI tools available within the container CLI itself. …

Is this the container CLI?
Einsteinjr Posted December 10, 2021

3 hours ago, guy.davis said: In `:develop`, you can pass environment variable "forktools_skip_build" with value "true" to skip this patching. …

Trying this out now. Will let you know if it helps with the RAM usage.
Einsteinjr Posted December 10, 2021

Images are from an Intel i7-8700. CPU usage spikes to around 10% for each fork; the bigger the blockchain, the more CPU it takes to farm. Memory usage seems a bit more stable. The last container restart was 3 hours ago - 22GB of memory for all 10 containers.

UPDATE: 16 hours after restarting the containers, I'm now up to 27.5GB used for the same 10 Machinaris containers.
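A quick way to track totals like these between checks is to read per-container figures from `docker stats` and sum them. Below is a small sketch of such a helper; the sample values fed to it are the GiB readings quoted earlier in the thread, not live output.

```shell
# Sketch: total a list of per-container memory readings (in GiB).
# Live per-container numbers come from:
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}'
sum_gib() {
  printf '%s\n' "$@" | awk '{ s += $1 } END { printf "%.3f\n", s }'
}

# Five of the pre-reboot readings reported above:
sum_gib 2.548 2.721 2.354 2.46 3.994   # → 14.077
```

Recording the total every few hours makes the "creeps back up" pattern easy to confirm without Telegraf.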
guy.davis (Author) Posted December 10, 2021

16 hours ago, Einsteinjr said: UPDATE: 16 hours after restarting the containers, I'm now up to 27.5GB used for the same 10 Machinaris containers.

Thanks for the update. Here's a view of my farming (no longer syncing) forks, using `:develop` images including the latest Chives codebase. Interestingly, both Chia and Chives show higher memory usage than I've seen recently. These are the two unique containers that go through code which lists the plots and gathers details about them for display in the WebUI. This makes me wonder if something is leaking memory in that code path. I will investigate further. Keep me apprised of any further findings, and thanks for the tip!
ahowlett Posted December 12, 2021

Unraid, Machinaris, not plotting any more. The log says "error: stat of /root/.chia/farmr/log*txt failed: No such file or directory". Running `touch /root/.chia/farmr/log.txt` temporarily got it running again, but it's recurred. Any advice please?
_cr8tor_ Posted December 12, 2021

1 hour ago, ahowlett said: unRAID, Machinaris, not plotting any more. …

How did you get it working again? That detail matters most here; the error could possibly be pointing to a hardware issue, but we need more info to say.
guy.davis (Author) Posted December 12, 2021

4 hours ago, ahowlett said: unRAID, Machinaris, not plotting any more. The log says "error: stat of /root/.chia/farmr/log*txt failed: No such file or directory" …

Hi, I'm not sure these two symptoms are related:

1) The farmr log rotation outputs a spurious warning to stdout, which is what shows up in the Machinaris/Unraid logs. Already fixed in the `:test` release.
2) "Not plotting anymore": please ensure you are updated to the latest Machinaris images, then troubleshoot plotting. If you're still having an issue, please supply all relevant logs. Thanks!
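Until that `:test` fix lands, the stopgap mentioned above amounts to recreating the file the rotation job expects. A small sketch, run from the Machinaris container console (the path is taken from the error message; the FARMR_DIR override is purely hypothetical, for trying this outside the container):

```shell
# Stopgap sketch: recreate the log file farmr's rotation complains about.
# FARMR_DIR is a hypothetical override; the default is the path from the error.
base="${FARMR_DIR:-/root/.chia/farmr}"
mkdir -p "$base"
touch "$base/log.txt"
```

As noted, this only silences the warning until the next rotation; updating images is the real fix.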
mdrodge Posted December 16, 2021

On 12/10/2021 at 2:52 AM, _cr8tor_ said: Is this the container cli?

Yes mate. Don't worry though, you won't be expected to use it unless you want to (or maybe if Guy asks you to when you talk to him directly).
_cr8tor_ Posted December 16, 2021

45 minutes ago, mdrodge said: Yes mate. …

All good. It helps to know for sure though! Thanks for the reply.
Shunz Posted December 18, 2021

Hi guy.davis! Plotman seemed to fill the Docker image (the docker service now fails to start, requiring a docker image deletion and container re-adding) when it tried to send a completed plot to a destination (defined under dst in locations) that was already too full. Is there any way to prevent this, other than 1) un-selecting full drives from the dst list, or 2) archiving? I think another incident is bound to happen, and this feels easy enough to hit that I don't think I'm the first to experience it... (edit - corrected my own drunk 5am writing)
guy.davis (Author) Posted December 19, 2021

21 hours ago, Shunz said: Plotman seemed to make the Docker container go full … when it tried to send a completed plot to a destination … that is already too full.

Hi, sorry to hear that. No, I haven't seen reports of the Docker image space filling up due to plotting unless the "tmp", "dst", or optionally "tmp2" locations were not actually defined as host mount points in the Unraid docker configuration. As long as Plotman is writing to volume-mounted locations, the Docker image space should not be affected. I recommend checking the Volumes defined in your Machinaris container on the Unraid UI's Docker tab. Hope this helps.
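From inside the container, a quick sanity check along the lines guy.davis describes might look like the sketch below. The paths are made-up examples; substitute whatever your Plotman config actually lists under tmp/dst.

```shell
# Sketch: verify each Plotman tmp/dst path is really a mount point, so
# plots land on the array rather than inside the docker image.
# /plotting and /plots are example paths only.
for d in /plotting /plots; do
  if mountpoint -q "$d"; then
    df -h "$d"    # mounted: show remaining space
  else
    echo "$d is NOT a mount point - writes here would fill the docker image"
  fi
done
```

Any path that reports "NOT a mount point" should be mapped to a host share in the container's Volume settings before plotting resumes.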
ehcorn Posted December 20, 2021

I get the error "OSError: [Errno 98] error while attempting to bind on address ('::', 8444, 0, 0): address already in use". As a result, my Chia farm doesn't start properly and can't sync. I have no idea where to start here; I've been trying to trim my config down and get it as close as possible to stock, but it's still throwing this error.
guy.davis (Author) Posted December 21, 2021

6 hours ago, ehcorn said: I get an error "OSError: [Errno 98] error while attempting to bind on address ('::', 8444, 0, 0): address already in use" …

Hi, this could be a weird side-effect of the latest Chia dust storm, or perhaps something stuck on update. Either way, I would recommend updating to the `:test` images. If that doesn't help, restart the Unraid system to force the port to be freed up. Hope this helps!
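Before reaching for a full reboot, it may be worth checking what is actually holding the port. A sketch, run on the Unraid host (this assumes the iproute2 `ss` tool is available, which it normally is on Unraid):

```shell
# Sketch: show which process (if any) is already bound to Chia's port 8444.
# If another container or a stuck chia_full_node holds it, it appears here.
ss -ltnp 2>/dev/null | grep ':8444' || echo "nothing listening on port 8444"
```

If the listener turns out to be a leftover process inside a Machinaris container, restarting just that container should free the port without rebooting the whole server.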
DoeBoye Posted December 21, 2021

Is there an issue with Maize farming? I've been farming it since it was added to Machinaris, and the GUI tells me the expected time to win is around a day, yet after weeks I have only won 2.5 Maize. I farm all the coins, and the rest are behaving as expected... Is this normal, and the time to win is just out of whack?
_cr8tor_ Posted December 21, 2021

15 minutes ago, DoeBoye said: Is this normal, and the time to win is just out of whack?

It sounds like Maize just hasn't been lucky for you so far. I have 360 plots and have farmed 20 Maize since the docker was released. HDDCoin is one where I seemed to have a lot of luck at first, and now it's slow. Sounds to me like you're on the wrong side of luck on that one for the time being.