[Support] Machinaris - Chia cryptocurrency farming + Plotman plotting + Unraid WebUI


Recommended Posts

8 minutes ago, Einsteinjr said:

Definitely does not match mine. My cryptodoge container is taking 3GiB on its own.

Reboot that container and see if it changes.

I've rebooted a couple of mine and their memory usage has dropped.

 

I rebooted cactus and it went from 2.548GiB to 1.615GiB once it completed the reboot and everything associated with it.

Now it's sitting around 1.615GiB and it's been a while.

I rebooted flax and it went from 2.721GiB to 1.992GiB.

Maize, from 2.354GiB to 1.635GiB.
btcgreen, from 2.46GiB to 1.495GiB.

And then I rebooted machinaris and it went from 3.994GiB to 2.67GiB.

 

All of these were rebooted and given ample time to catch up and calm down again.

After each reboot there is an obvious spike in processor utilization and a visible rebuild sequence before each container settles back into what is essentially its normal CPU usage.

 

Hey @guy.davis, any idea what's going on here?

I'm not saying something is broken; I don't know enough to tell.

But it seems like it might be worth a look-see by your excellence. ;-)

Something seems to be munching on memory.
Do you work for a memory manufacturer? Is that the gig here with all this excellent work you're doing? haha

 

Let me know if there is anything i can do to help.

I can hop on the Discord if needed to help diagnose.

 

In the meantime, I'm rebooting the rest of the containers as well.

 

Link to comment
2 minutes ago, _cr8tor_ said:

Reboot that container and see if it changes.

I've rebooted a couple of mine and their memory usage has dropped.

 

Yup - if I reboot the container, memory usage drops significantly and then gradually creeps back up. Most of my Machinaris containers are using >3GB now after running for ~18 hours. Now that I have a buffer to work with, I'm going to leave them on to see if they will continue to use up whatever is available.

 

Having issues with Telegraf currently, so I'm not able to get the nice metrics I usually can.

Link to comment
2 hours ago, _cr8tor_ said:

Hey @guy.davis, any idea whats going on here?

Something seem to be munching on memory.

 

It's possible that some are facing issues with the addition of ForkTools and the multi-proc patch that is applied automatically on container start.  I haven't seen this myself, but everyone has different hardware/configurations.

In `:develop`, you can pass environment variable "forktools_skip_build" with value "true" to skip this patching.  I need to verify this before it gets pushed to the `:test` stream in a bit.  Let's see if that helps you all.
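For anyone outside the Unraid template flow, a minimal command-line sketch of what that setting amounts to (the `guydavis/machinaris` image name and bare-bones flags are assumptions; keep your existing volume and port mappings):

```sh
# Sketch only: run the :develop image with the ForkTools patch skipped.
# Your real command needs the same -v mounts and -p ports you already use.
docker run -d --name machinaris \
  -e forktools_skip_build=true \
  guydavis/machinaris:develop
```

In Unraid itself this is just: edit the container, change the repository tag from `:latest` to `:develop`, and add a new Variable with that name and value.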

Link to comment
5 minutes ago, guy.davis said:

In `:develop`, you can pass environment variable "forktools_skip_build" with value "true" to skip this patching.  I need to verify this before it gets pushed to the `:test` stream in a bit.  Let's see if that helps you all.

 

I'm still a bit of a newb when it comes to Unraid and all this.

 

To clarify: should I change each of my containers (Machinaris and each coin) from `:latest` to `:develop`, and then also manually add the environment variable "forktools_skip_build" with value "true" to each container?

Is that correct?

Or should I perform those steps only on the main Machinaris container?

 

And for what it's worth, I'm running an i7-6600K on a Z170 chipset with 32GB (4x8GB) of mismatched RAM (being replaced by 64GB (4x16GB) of matched).

Edited by _cr8tor_
Link to comment
2 hours ago, _cr8tor_ said:

Also, what is ForkTools? I can't find it as an app.

Is this something I should be using? haha

Can anyone give me a quick explainer and/or point me to info on it? I seem to be struggling for the right search terms.

It's all dinner forks or motorcycle tools. 🙂

Look at his latest release. He integrated ForkTools directly into the container.

Link to comment
2 hours ago, guy.davis said:

It's possible that some are facing issues with the addition of ForkTools and the multi-proc patch that is applied automatically on container start.

It seems that the timeline aligns with that patch.

 

I currently don't limit the containers' cores, so every container has all 12 cores assigned. Maybe that exacerbates the problem?
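If you want to test that theory, a minimal sketch of pinning a container to a subset of cores; in Unraid this string goes in the container's Extra Parameters field (the core numbers are arbitrary examples):

```sh
# Limit this container to four logical cores (0-3) instead of all twelve.
# In Unraid: container settings -> Advanced View -> Extra Parameters.
--cpuset-cpus="0-3"
```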

Link to comment
3 hours ago, guy.davis said:

In `:develop`, you can pass environment variable "forktools_skip_build" with value "true" to skip this patching.

Trying this out now. Will let you know if it helps with the RAM usage.

Link to comment

Images are from an Intel i7-8700.

 

[Screenshot 2021-12-09: per-container CPU usage]

 

CPU usage spikes to around 10% for each fork. The bigger the blockchain, the more CPU it takes to farm.

 

[Screenshot 2021-12-09: per-container memory usage]

Seems the memory usage is a bit more stable.  Last container restart was 3 hours ago.  22GB of memory for all 10 containers.

 

UPDATE: 16 hours after restarting the containers, I'm now up to 27.5GB used for the same 10 Machinaris containers.

Edited by Einsteinjr
Link to comment
16 hours ago, Einsteinjr said:

UPDATE: 16 hours after restarting the containers, I'm now up to 27.5GB used for the same 10 Machinaris containers.

 

Thanks for the update.  Here's a view of my farming (no longer syncing) forks, using `:develop` images including the latest Chives codebase:

 

[Screenshot: memory usage across guy.davis's fork containers]

 

Interestingly, both Chia and Chives show higher memory usage than I've seen recently.  These are the two unique containers that go through code which lists the plots and gathers details about them for display in the WebUI.  This makes me wonder if something is leaking memory in that code path.  I will investigate further.  Keep me apprised of any further findings, and thanks for the tip!
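For anyone wanting to capture these per-container numbers over time rather than eyeballing the Docker tab, a minimal sketch using plain `docker stats` (the log file name is arbitrary):

```sh
# Snapshot memory usage of every running container once a minute,
# building a simple timeline of any creep.
while true; do
  date
  docker stats --no-stream --format "{{.Name}}: {{.MemUsage}}"
  sleep 60
done >> machinaris-mem.log
```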

 

Link to comment
1 hour ago, ahowlett said:

unRAID, Machinaris, not plotting anymore.

 

The log says `error: stat of /root/.chia/farmr/log*txt failed: No such file or directory`.
Running `touch /root/.chia/farmr/log.txt` temporarily got it running again, but it has recurred.

Any advice please?

How did you get it working again?

That error could possibly be pointing to a hardware issue.

We need more info; mainly, what exactly did you do to get it working again?

Link to comment
4 hours ago, ahowlett said:

unRAID, Machinaris, not plotting anymore.

 

The log says `error: stat of /root/.chia/farmr/log*txt failed: No such file or directory`.
Running `touch /root/.chia/farmr/log.txt` temporarily got it running again, but it has recurred.

Any advice please?

 

Hi, I'm not sure these two symptoms are related:

1) Farmr log rotation outputs a spurious warning to stdout, so it is seen in the Machinaris Unraid logs.  Already fixed in the `:test` release.

2) "Not plotting anymore": please ensure you are updated to the latest Machinaris images, then troubleshoot plotting.
If you're still having an issue, please supply all relevant logs.  Thanks!
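Until that `:test` fix lands, a minimal sketch of the stopgap from the Unraid host (the container name `machinaris` is an assumption; substitute yours):

```sh
# Recreate the missing farmr log file inside the running container
# so the spurious "stat ... failed" warning stops firing.
docker exec machinaris touch /root/.chia/farmr/log.txt

# Then tail the container log to confirm the warning has stopped.
docker logs --tail 50 machinaris
```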

Edited by guy.davis
Link to comment

Hi guy.davis!

 

Plotman seemed to fill the Docker image (the Docker service now fails to start, requiring deletion of the docker image and re-adding the containers) when it tried to send a completed plot to a destination (defined under locations in dst) that was already too full.

 

Is there any way to prevent this, other than 1) un-selecting full drives from the dst list, or 2) archiving? I think another incident is bound to happen; this feels easy enough to trigger that I don't think I'm the first to experience it...

 

(edit - corrected my own drunk 5am writing)

Edited by Shunz
Link to comment
21 hours ago, Shunz said:

Plotman seemed to fill the Docker image (the Docker service now fails to start, requiring deletion of the docker image and re-adding the containers) when it tried to send a completed plot to a destination (defined under locations in dst) that was already too full.

 

Is there any way to prevent this, other than 1) un-selecting full drives from the dst list, or 2) archiving? I think another incident is bound to happen; this feels easy enough to trigger that I don't think I'm the first to experience it...

 

Hi, sorry to hear that. No, I haven't seen reports of the Docker image space becoming full due to plotting unless the "tmp", "dst", or optionally the "tmp2" locations were not actually defined as host mount points in the Unraid docker configuration. As long as Plotman is writing to volume-mounted locations, the Docker image space should not be affected. I recommend checking the Volumes defined for your Machinaris container on the Unraid UI's Docker tab. Hope this helps.
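A quick way to double-check those mounts from the Unraid terminal; a minimal sketch (container name `machinaris` assumed):

```sh
# List every volume mapping: Source is the host path, Destination is
# the path inside the container (e.g. /plotting, /plots).
docker inspect machinaris \
  --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}'
```

If Plotman's tmp/dst directories don't show up as Destinations in that list, writes to them land inside the Docker image and will eventually fill it.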

Link to comment

I get an error: `OSError: [Errno 98] error while attempting to bind on address ('::', 8444, 0, 0): address already in use`

This results in my Chia farm not starting properly, and it can't sync at startup. I have no idea where to start; I've been trying to trim my config down and get it as close to stock as possible, but it's still throwing this error.

Link to comment
6 hours ago, ehcorn said:

I get an error: `OSError: [Errno 98] error while attempting to bind on address ('::', 8444, 0, 0): address already in use`

This results in my Chia farm not starting properly, and it can't sync at startup. I have no idea where to start; I've been trying to trim my config down and get it as close to stock as possible, but it's still throwing this error.

 

 

Hi, this could be a weird side-effect of the latest Chia dust storm, or perhaps something got stuck on update.  Either way, I would recommend updating to the `:test` images.  If that doesn't help, restart the Unraid system to force the port to be freed up.  Hope this helps!
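Before reaching for a full reboot, it may be worth checking what is actually holding port 8444; a minimal sketch (assumes the `ss` utility is available; `netstat -tulpn` works similarly):

```sh
# Show which process is bound to Chia's peer port 8444.
ss -ltnp | grep 8444

# Or list any containers already publishing that port.
docker ps --format '{{.Names}}: {{.Ports}}' | grep 8444
```

A second container (or a stuck old instance) publishing the same port is a common culprit.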

Link to comment

Is there an issue with Maize coin farming? I've been farming it since it was added to Machinaris, and the GUI tells me the expected time to win is around a day, yet after weeks I have only won 2.5 Maize coins. I farm all the coins, and the rest are behaving as expected...

 

Is this normal, and is the time to win just out of whack?

Link to comment
15 minutes ago, DoeBoye said:

Is this normal, and is the time to win just out of whack?

It sounds like Maize just hasn't been lucky for you so far.

I have 360 plots and have farmed 20 Maize since the Docker container was released.

HDDCoin is one where I seemed to have a lot of luck at first, and now it's slow.

Sounds to me like you're on the wrong side of luck on that one for the time being.
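For context on the statistics here: "Expected time to win" is only a long-run average, roughly (network space ÷ your farm's space) × the average block interval, and actual wins follow a Poisson process, so multi-week droughts against a one-day estimate are unlucky but entirely plausible. A minimal sketch of the arithmetic (all numbers are made-up examples, and the 18.75 s Chia-style block interval is an assumption for Maize):

```sh
# Hypothetical: 35 TiB farm, 35,000 TiB network, 18.75 s average block time.
awk 'BEGIN {
  farm_tib = 35; net_tib = 35000; block_s = 18.75
  days = (net_tib / farm_tib) * block_s / 86400
  printf "expected time to win: %.2f days\n", days
}'
```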

Link to comment
