[Support] Machinaris - Chia cryptocurrency farming + Plotman plotting + Unraid WebUI


Recommended Posts

12 minutes ago, guy.davis said:

 

Sorry to hear that you're experiencing syncing problems.  It should be possible to restart the chia services from within the container without impacting concurrent plotting jobs.  Try this:



docker exec -it machinaris bash

chia start -r

 

I gave that a try. It seems you have to select which service to restart; I just selected all of them since I don't have any plots running:

 

chia start all -r

 

I will give it a bit and see if it proceeds any further.

 

The Ubuntu system is still syncing, slowly but surely; it's up to the 24th at this point and was on the 23rd when I started it.

Edited by TexasUnraid
  • Like 1
Link to comment
45 minutes ago, guy.davis said:

 

Sorry to hear that you're experiencing syncing problems.  It should be possible to restart the chia services from within the container without impacting concurrent plotting jobs.  Try this from the Docker console for Machinaris under Unraid (so an in-container command):


chia start farmer -r

 

Thanks for the tip. I restarted all services and now it seems to be syncing again.

 

Another bonus: before, the plot count on the Summary page wasn't updating while my gaming rig was plotting to the share that holds my plots. After the restart, the new plots have been added to the overall plot count on the Summary page.

  • Like 1
Link to comment

Odd, mine still isn't syncing. I seem to have valid connections, but strangely there's no traffic on them.

 

Connections:
Type      IP                                     Ports       NodeID      Last Connect      MiB Up|Dwn
TIMELORD  127.0.0.1                              53788/8446  887b357d... May 28 11:47:36      0.2|0.0    
FARMER    127.0.0.1                              53804/8447  6c068ee5... May 28 11:47:39      0.0|0.0    
FULL_NODE 180.183.221.89                          8444/8444  caf364d4... May 28 12:27:11      0.1|30.9   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 190.2.141.199                           8444/8444  ed389405... May 28 12:27:10      0.1|84.5   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 86.23.181.251                           8444/8444  fed2b561... May 28 12:27:10      0.1|69.5   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 135.23.48.145                           8444/8444  44737016... May 28 12:27:12      0.1|0.7    
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 35.247.157.234                          8444/8444  206db136... May 28 12:27:11      0.0|28.6   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 46.98.19.214                            8444/8444  5fdd6d0d... May 28 12:25:19      0.0|0.0    
                                                 -SB Height:   336841    -Hash: 0bafb8eb...
WALLET    127.0.0.1                              33170/8449  7667a71e... May 28 12:24:35      0.0|0.0    
FULL_NODE 2.51.11.226                             8444/8444  79309208... May 28 12:27:11      0.0|0.1    
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 95.165.156.202                          8444/8444  026af06b... May 28 12:27:11      0.0|0.0    
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...

 

 

Ubuntu is still moving along fine, up to the 25th now.

Link to comment
12 minutes ago, TexasUnraid said:

Odd, mine still isn't syncing. I seem to have valid connections, but strangely there's no traffic on them.

 



Connections:
Type      IP                                     Ports       NodeID      Last Connect      MiB Up|Dwn
TIMELORD  127.0.0.1                              53788/8446  887b357d... May 28 11:47:36      0.2|0.0    
FARMER    127.0.0.1                              53804/8447  6c068ee5... May 28 11:47:39      0.0|0.0    
FULL_NODE 180.183.221.89                          8444/8444  caf364d4... May 28 12:27:11      0.1|30.9   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 190.2.141.199                           8444/8444  ed389405... May 28 12:27:10      0.1|84.5   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 86.23.181.251                           8444/8444  fed2b561... May 28 12:27:10      0.1|69.5   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 135.23.48.145                           8444/8444  44737016... May 28 12:27:12      0.1|0.7    
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 35.247.157.234                          8444/8444  206db136... May 28 12:27:11      0.0|28.6   
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 46.98.19.214                            8444/8444  5fdd6d0d... May 28 12:25:19      0.0|0.0    
                                                 -SB Height:   336841    -Hash: 0bafb8eb...
WALLET    127.0.0.1                              33170/8449  7667a71e... May 28 12:24:35      0.0|0.0    
FULL_NODE 2.51.11.226                             8444/8444  79309208... May 28 12:27:11      0.0|0.1    
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...
FULL_NODE 95.165.156.202                          8444/8444  026af06b... May 28 12:27:11      0.0|0.0    
                                                 -SB Height:   350016    -Hash: 6a5c0ad1...

 

 

Ubuntu is still moving along fine, up to the 25th now.

 

Well, that Network | Connections page shows some decent Down speeds, so something is definitely happening.  What does your Network | Blockchain page show? How far back are you on sync... datewise?

 

Mine shows: "Current Blockchain Status: Full Node Synced" with a Peak Time value just a minute or so off current time.

 

Have you tried Network | Connections | Add Connection with one of these?
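For reference, a peer can also be added from the CLI inside the container; a minimal sketch, assuming the standard Chia CLI (substitute a real peer address for the placeholder):

docker exec -it machinaris bash
chia show -a <peer-host>:8444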

Edited by guy.davis
Link to comment
2 minutes ago, guy.davis said:

 

Well, that Network | Connections page shows some decent Down speeds, so something is definitely happening.  What does your Network | Blockchain page show? How far back are you on sync... datewise?

 

Mine shows: "Current Blockchain Status: Full Node Synced" with a Peak Time value just a minute or so off current time.

 

I know, that's what's strange; the blockchain has been stuck like this for two days:

 

Current Blockchain Status: Full Node syncing to block 350059 
Currently synced to block: 272168
Current Blockchain Status: Not Synced. Peak height: 272168
      Time: Wed May 12 2021 20:14:00 CDT                  Height:     272168

Estimated network space: 4.045 EiB
Current difficulty: 274
Current VDF sub_slot_iters: 109051904
Total iterations since the start of the blockchain: 878933278386

 

Yet something is being downloaded and CPU is being used (although not as much as initially).

Link to comment

Well, just a week after v0.2.1 came out, another big step forward for Machinaris is here: v0.3.0  

Huge thanks to all those that ran the test version and logged Github issues.  

Changes:

  • Integrate the excellent Chiadog project for log monitoring and alerting
  • Plotman Analyze output to show time spent in each plotting phase
  • Log Viewer for Farming, Alerts, and Plotting including logs for running plot jobs
  • Rebase off ubuntu:focal, include nice Dockerfile cleanup by sparklyballs 
  • When mode=plotter, autoconfigure Plotman with provided farmer_pk and pool_pk
  • When mode=harvester, auto import of your farmer's CA certificates

Get your update by pulling docker image: `ghcr.io/guydavis/machinaris`
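If you prefer to update manually rather than through the Unraid Docker tab, the equivalent pull would be something like this (a sketch, assuming the default latest tag):

docker pull ghcr.io/guydavis/machinaris:latest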

  • Like 2
  • Thanks 2
Link to comment

OK, so I copied the "mainnet" folder from the Ubuntu system and put it into the appdata folder (after making a backup).

 

I started up the container and the wallet is synced!

 

But now I get this error in the network section:

 

Connection error. Check if full node rpc is running at 8555
This is normal if full node is still starting up

 

I'm guessing I overwrote some files I should not have. Any idea which ones they were?

Link to comment

Can anyone point me in the right direction? I'm plotting to a RAM disk, since I have 384 GB of RAM in my server. Something is going wrong at the completion of the plot, causing it to never get transferred to the plot destination. Possibly a permissions issue? I have tried running the container privileged and tried chmod on the plotting and plot directories... Any ideas?

Link to comment
51 minutes ago, the0nlyace said:

Can anyone point me in the right direction? I'm plotting to a RAM disk, since I have 384 GB of RAM in my server. Something is going wrong at the completion of the plot, causing it to never get transferred to the plot destination. Possibly a permissions issue? I have tried running the container privileged and tried chmod on the plotting and plot directories... Any ideas?


Hi, sorry to hear you're encountering this.  Please share your plotman.yaml (in particular your directories:tmp and directories:dst listings).  Also, please share a view of your Docker settings such as this example for plot folders and the 'plots_dir' variable:

[screenshot: Docker container settings showing the plot folder mappings and the 'plots_dir' variable]

Perhaps a path is misconfigured?
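For comparison, here's a minimal sketch of the relevant plotman.yaml section, assuming the RAM disk is mapped into the container at /plotting and the final plots share at /plots (your actual mounts may differ):

directories:
        tmp:
                - /plotting    # in-container temp dir, mapped to the host RAM disk
        dst:
                - /plots       # in-container destination, mapped to the Unraid plots share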

Edited by guy.davis
Link to comment
3 hours ago, TexasUnraid said:

OK, so I copied the "mainnet" folder from the Ubuntu system and put it into the appdata folder (after making a backup).

 

I started up the container and the wallet is synced!

 

But now I get this error in the network section:

 


Connection error. Check if full node rpc is running at 8555
This is normal if full node is still starting up

 

I'm guessing I overwrote some files I should not have. Any idea which ones they were?

 

That's odd.  Well, if the error persists beyond a restart, I would suggest reviewing the Chia logs found in /mnt/user/appdata/machinaris/mainnet/log/.  They may have a clue to the issue.  Let us know what you find.
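A quick way to watch them from the Unraid host (assuming Chia's default log filename, debug.log):

tail -f /mnt/user/appdata/machinaris/mainnet/log/debug.log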

Link to comment

Is there any way for the main Summary page to show the hash challenges for harvesters connecting to the node, like the normal GUI does? Glancing through the log, I'm guessing the answer is no, but it would be awesome if it showed the plots-passed status of the connecting harvesters.

 

I think I will dabble with having multiple dockers running so that Chiadog can point at each of my harvesters for reporting. That plus Pushover notifications or something should help things out.  Love the new release!

 

Update: I have 2 dockers running, with a remote harvester's log file overwriting the local one, but at a glance it looks like this band-aid doesn't work for Chiadog. I don't think Chiadog is expecting to see the log of a harvester, as it doesn't pull out the plot-count and processing-time info like it does from a full node log.

 

Seems like the best solution for monitoring remote harvester logs will be to get the Chiadog developer to add this officially as a feature.

Edited by jaj08
  • Like 1
Link to comment

So I see Chiadog's official recommendation for monitoring multiple harvesters is to spin up multiple Chiadog instances, similar to what you suggested with multiple dockers.  

 

https://github.com/martomi/chiadog/wiki/Monitoring-Multiple-Harvesters

 

Running multiple Machinaris dockers, though, spins up a lot more services than necessary by default, unless of course someone comes up with a dedicated Chiadog docker for Unraid.  So I'm wondering: any chance you could add support and a variable for multiple Chiadog instances?

 

Perhaps a variable where we can specify how many instances we need; then the primary interface would have Alert1, Alert2, Alert3 sections in the GUI, with a separate configuration file for each instance.  Ideally each Alert# could be given a custom name so we can label the server being monitored.  My home setup works perfectly with Machinaris, but my office setup has a total of eight or so remote harvesters split between two locations.

 

Link to comment
Just had to reboot Unraid… After the reboot, farming is unavailable and there's a connection error…

It's been like that for an hour. Can that still be a startup thing that will resolve itself, or is something else wrong?


Sent from my iPhone using Tapatalk

Solved it… unrelated issue… the cache drive was full.


Sent from my iPhone using Tapatalk
  • Like 1
Link to comment
1 hour ago, jaj08 said:

So I see Chiadog's official recommendation for monitoring multiple harvesters is to spin up multiple Chiadog instances, similar to what you suggested with multiple dockers.  

 

https://github.com/martomi/chiadog/wiki/Monitoring-Multiple-Harvesters

 

Running multiple Machinaris dockers, though, spins up a lot more services than necessary by default, unless of course someone comes up with a dedicated Chiadog docker for Unraid.  So I'm wondering: any chance you could add support and a variable for multiple Chiadog instances?

 

Perhaps a variable where we can specify how many instances we need; then the primary interface would have Alert1, Alert2, Alert3 sections in the GUI, with a separate configuration file for each instance.  Ideally each Alert# could be given a custom name so we can label the server being monitored.  My home setup works perfectly with Machinaris, but my office setup has a total of eight or so remote harvesters split between two locations.

 

 

Lots of good ideas here.  First, and possibly the quickest fix for you, is to use this pure Chiadog container.

 

With respect to Machinaris, you can start an instance in harvester-only mode, which will run chiadog and monitor the chia log file.
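As a rough sketch (the mode value comes from the v0.3.0 notes above; on Unraid you would set it as a container variable in the template rather than on the command line):

docker run -d --name machinaris-harvester -e mode=harvester ghcr.io/guydavis/machinaris

plus your usual volume mappings for appdata and the plots share.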

 

Currently, multiple instances of Machinaris on a LAN (a fullnode, a harvester, and a couple of plotters) don't talk to each other.  You can monitor them with separate browser tabs.  I'm looking to improve this with distributed worker support and a single monitoring pane on the fullnode.  Please add your thoughts to this open Issue on how it might look.

Link to comment

First of all, I'm very unsteady on my feet with Docker and a total newb to Unraid, so apologies in advance...

 

I want to increase plotting speed because I have multiple cores and RAM free and 2 x 1TB NVME drives to plot to.

 

I'm not clear exactly if I edit settings in the plotman.yaml area of the container from the GUI or if I have to stop the container (and therefore kill active plots) to change things.

 

And is there a guide on how to "stagger" plots - I've heard this is best practice.  I want to make use of the multiple cores and drives I have sitting idle currently.

 

Thank you very much and this is a wonderful tool!

Edited by Ystebad
autocorrect spelling sucks
Link to comment
2 hours ago, Ystebad said:

First of all, I'm very unsteady on my feet with Docker and a total newb to Unraid, so apologies in advance...

 

I want to increase plotting speed because I have multiple cores and RAM free and 2 x 1TB NVME drives to plot to.

 

I'm not clear exactly if I edit settings in the plotman.yaml area of the container from the GUI or if I have to stop the container (and therefore kill active plots) to change things.

 

And is there a guide on how to "stagger" plots - I've heard this is best practice.  I want to make use of the multiple cores and drives I have sitting idle currently.

 

Thank you very much and this is a wonderful tool!

 

Thanks!  Nice setup, let's get your Plotman tuned up.  First off, I assume you are using the two 1 TB plotting drives separately.  Follow these directions to get two separate plotting mounts in the Machinaris container.  (If they are instead a RAID-0 btrfs cache pool, then just mount a single plotting drive.)

 

Once you have your SSDs mounted in the Machinaris container as /plotting1 and /plotting2 (or similar), then let's go edit the plotman.yaml.  Click on the 'Settings | Plotting' page and edit the plotman.yaml there.  Here are your changes:

  1. Under 'directories:', find 'tmp:' and change the single '/plotting' line to two separate lines of '- /plotting1' and '- /plotting2'.
  2. Check that your 'directories:' 'dst:' entry (defaults to /plots) points to your final destination folder.  /plots in-container should be a volume mount to your Unraid host's target plots directory.  If you have more than one final plots directory, follow these instructions first.
  3. Next, let's tune the plotting settings.  Scroll down to 'scheduling:' and set 'tmpdir_max_jobs' to 4.  This tells Plotman it can start up to 4 concurrent plot jobs (with staggered start times) on each 1 TB SSD.
  4. Since you have 2 plot dirs, change 'global_max_jobs:' to 8, which is the total number of concurrent plot jobs.
  5. Then hit the Save button to validate the changes.  You DON'T have to restart the container, so any running plot jobs should continue to run.  You DO need to restart only the Plotman process to pick up the changes.  Do this from the Plotting page: first click 'Stop Plotman', wait for confirmation, then 'Plot On!' to start it back up.

Your existing plots should keep running, but now Plotman will take these settings into account and probably launch new jobs right away, or shortly after.  You can tune further if needed.  Hope this helps!  Happy plotting!

 

EDIT: You may want to tweak these settings for even more concurrency:

  • tmpdir_stagger_phase_major: 2
  • tmpdir_stagger_phase_minor: 1
  • tmpdir_stagger_phase_limit: 1

 

Basically, the above says to start only one job (tmpdir_stagger_phase_limit) per temp directory until the currently running job clears phase 2:1.  Experiment with these, but understand that too many concurrent jobs (5+) per drive might fill it up and block the plot jobs.
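Putting those pieces together, the relevant parts of plotman.yaml would look roughly like this (a sketch assuming the /plotting1, /plotting2 and /plots mounts described above):

directories:
        tmp:
                - /plotting1
                - /plotting2
        dst:
                - /plots

scheduling:
        tmpdir_stagger_phase_major: 2
        tmpdir_stagger_phase_minor: 1
        tmpdir_stagger_phase_limit: 1     # jobs allowed per temp dir before a running job clears phase 2:1
        tmpdir_max_jobs: 4                # max concurrent jobs per temp dir
        global_max_jobs: 8                # max concurrent jobs overall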

Edited by guy.davis
additional tips
Link to comment

@guy.davis Thanks so much.

 

I clicked "edit" on the docker/machinaris container to add the /mnt/user/chiaplotting2 share as container path /plotting2.  Unfortunately, when I hit save it restarted the docker, so I lost my active plots.  I must have done something wrong or misunderstood how to add the host shares to the docker container.

 

in the plotman.yaml I added:

 

       tmp:
                - /plotting
                - /plotting2
 

Also:

 

# Don't run more than this many jobs at a time on a single temp dir.
        tmpdir_max_jobs: 4

        # Don't run more than this many jobs at a time in total.
        global_max_jobs: 8

 

I stopped and restarted Plotman.  I only have 1 plot listed; it doesn't seem to have increased the number of ongoing jobs.  I understand it may wait a bit on the primary plotting drive, but since I have 2 drives it should start 1 on each drive first and then add more later, right?

 

If I've got 1 TB of plotting space, can I run 3 on each drive concurrently?  Plenty of cores and memory left.

 

Edit:

 

Addendum: I also noticed a setting for upnp = true in the yaml file.  I don't run UPnP for security reasons, but I have forwarded port 8444 on my router to my Unraid machine.  Do I need to change this setting?

 

Edited by Ystebad
Link to comment
1 hour ago, Ystebad said:

@guy.davis Thanks so much.

 

I clicked "edit" on the docker/machinaris container to add the /mnt/user/chiaplotting2 share as container path /plotting2.  Unfortunately, when I hit save it restarted the docker, so I lost my active plots.  I must have done something wrong or misunderstood how to add the host shares to the docker container.

 

Sorry, I should have been clearer that a container restart kills all plotting.  That said, you want to get your volume mounts correct as the first step, so it's good you have this working now.

 

Quote

in the plotman.yaml I added:

 

       tmp:
                - /plotting
                - /plotting2
 

Also:

 

# Don't run more than this many jobs at a time on a single temp dir.
        tmpdir_max_jobs: 4

        # Don't run more than this many jobs at a time in total.
        global_max_jobs: 8

 

I stopped and restarted Plotman.  I only have 1 plot listed; it doesn't seem to have increased the number of ongoing jobs.  I understand it may wait a bit on the primary plotting drive, but since I have 2 drives it should start 1 on each drive first and then add more later, right?

 

If I've got 1 TB of plotting space, can I run 3 on each drive concurrently?  Plenty of cores and memory left.

 

Yes, with the settings outlined, as soon as the first plot reaches phase 2:1, the next will be started, up to a max of 4 at any time.  If you would prefer more jobs to be running in the same major phase at the same time, up to a max of 4, then increase "tmpdir_stagger_phase_limit" from 1 to 2, 3, or 4.  Just Save and restart Plotman (not the entire container) to pick up the change.  You'll need to experiment to find the best settings for you.

 

Quote

Edit:

 

Addendum: I also noticed a setting for upnp = true in the yaml file.  I don't run UPnP for security reasons, but I have forwarded port 8444 on my router to my Unraid machine.  Do I need to change this setting?

 

 

Yes, you can disable upnp in the Chia config.yaml.  It will be picked up on the next restart of the container, though you might get away with executing `chia start all -r` in-container (warning: I haven't tried this myself).  I don't think it's critical enough to rush a container restart for; just change it the next time you do one.
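For reference, the setting lives in the full_node section of config.yaml; a minimal sketch (key name as found in a stock Chia config, to the best of my recollection):

full_node:
  enable_upnp: False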

 

Hope this helps! Sounds like you're getting closer to an optimum profile for your Unraid server.

Edited by guy.davis
Link to comment
