[Support] Partition Pixel - Chia


Recommended Posts

On 5/14/2021 at 9:06 PM, zer0ed said:

So I have swar plot manager running, had time to play with it today.

This setup isn't elegant and probably isn't following best practices but it works for me. Please be mindful of this, if you break something, lose plots, etc.. don't come cryin'

 

Thank you so much!!! This is perfect! Exactly what I was looking for. I followed your steps and I have Swar humming along nicely inside this docker!! Now I can actually see what the docker is doing!!

 

As an aside, this is also a great roadmap for me on how to get other scripts running inside other dockers!!!! Super helpful!!!

10 hours ago, Gnomuz said:

Hi,

Thanks for your interest in my post, I'll try to answer your numerous questions, but I think you already got the general idea of the architecture pretty well 😉

- The RPi doesn't have an M.2 slot out of the box. By default, the OS runs from an SD card. The drawback is that SD cards don't last long when constantly written to, e.g. by the OS and the chia processes writing logs or populating the chain database; they were not designed for this kind of purpose. And you certainly don't want to lose your chia full node because of a failing $10 SD card, while the patiently created plots still occupy space but don't participate in any challenge... So I decided running the RPi OS from a small SATA SSD was a much more resilient solution, and bought an Argon One M.2 case for the RPi. I had an old 120 GB SATA SSD which used to host the OS of my laptop before I upgraded it to a 512 GB SSD, so I used that. This case is great because it's full metal and there are thermal pads between the CPU/RAM and the case, so it's essentially a passive heat dissipation setup. There's a fan in case temps go too high, but it never starts, unless you live in a desert I suppose. There are many other enclosures for RPis, but this one is my favorite so far.

 

- The RPi is started with the 'chia start farmer' command, which runs the following processes: chia_daemon, chia_full_node (blockchain syncing), chia_farmer (challenge management and communication with the harvester(s)), and chia_wallet. chia_harvester is also launched locally but is totally useless, as no plots are stored on storage accessible to the RPi. For an overview of this kind of distributed setup, have a look at https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines, which was my starting point. You can also use 'chia start farmer-no-wallet' and sync your wallet on another machine; I may do that in the future, as I don't like having the wallet on the machine exposed to the internet.
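A sketch of how that split looks command-wise (the IP and port below are placeholders; per the linked wiki, each harvester also needs the farmer's CA certificates before it will be accepted):

```shell
# On the RPi: full node + farmer + wallet
chia start farmer
# ...or keep the wallet synced on another machine instead:
chia start farmer-no-wallet

# On each harvester: point it at the farmer, then start it
# (192.168.1.50 is a placeholder for the RPi's LAN address,
#  8447 is the default farmer port)
chia configure --set-farmer-peer 192.168.1.50:8447
chia start harvester
```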

 

- The plotting rig doesn't need any chia service running on it; the plotting process can run offline. You just need to install chia on it, and you don't even need to have your private keys stored on it. You just run 'chia plots create (usual params) -f <your farmer public key> -p <your pool public key>', and that's all. The created plots will be farmed by the remote farmer once copied into the destination directory.
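As a sketch, an offline plotting invocation could look like this (paths and the -k/-n/-r values are illustrative; the two keys come from 'chia keys show' on the machine holding your keys):

```shell
# No private keys needed on the plotting rig: the plot is tied to
# the farmer/pool *public* keys passed via -f and -p (placeholders here)
chia plots create -k 32 -n 1 -r 4 \
  -t /mnt/nvme/plot_tmp \
  -d /mnt/staging/plots \
  -f <farmer_public_key> \
  -p <pool_public_key>
```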

 

- I decided to store the plots on the xfs array with one parity drive. I know the general consensus is to store plots on non-protected storage, considering you can replot them. But I hate the idea of losing a bunch of plots. You store them on high-density storage, let's say 12TB drives, which can hold circa 120 plots each. Elite plotting rigs with enterprise-grade temporary SSDs create a plot in 4 hours or less, so recreating 120 plots is circa 500 hours or 20 days. When you see the current netspace growth rate of 15% a week or more, that's a big penalty I think. If you have 10 disks, "wasting" one as a parity drive to protect the other 9 sounds like a reasonable trade-off, provided you have a spare drive around to rebuild the array in case of a faulty drive. To sum up, two extra drives (1 parity + 1 spare) reasonably guarantee the continuity of your farming process and prevent the loss of existing plots, whatever the size of your array. Of course with a single parity drive you are not protected against two drives failing together, but as usual it's a trade-off between available size, resiliency and cost, nothing specific to chia... And the strength of Unraid is you won't lose the plots on the healthy drives, unlike other raid5 solutions.
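The replot-time estimate can be sanity-checked with quick shell arithmetic (the inputs are the assumptions from this post, not measured values):

```shell
plots_per_drive=120   # ~12TB drive, circa 120 k=32 plots
hours_per_plot=4      # fast plotting rig with enterprise temp SSDs
total_hours=$(( plots_per_drive * hours_per_plot ))
echo "Replotting one drive: ${total_hours}h (~$(( total_hours / 24 )) days)"
```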

 

- As for the container, it runs only the harvester process ('chia start harvester'), which must be set up as per the link above, nothing difficult. From the container console, you can also optionally run a plotting process, if your Unraid server has a temporary unassigned SSD available (you can also use your cache SSD, but beware of space...). You run it just like on your plotting rig: 'chia plots create (relevant params) -f <farmer key> -p <pool key>'. The advantage is that the final copy from the temp dir to the dest dir is much quicker, as it's a local copy on the server from an attached SSD to the Unraid share (a 10 min copy vs 20/30 mins over the network for me).
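For completeness, the harvester setup from the wiki boils down to something like this inside the container (the CA path and IP are assumptions; the CA directory must first be copied over from the farmer):

```shell
# Re-initialize the harvester's certificates against the farmer's CA
# (/root/.chia/farmer_ca is a hypothetical path where you copied the
#  farmer's ~/.chia/mainnet/config/ssl/ca directory)
chia init -c /root/.chia/farmer_ca
# Tell the harvester where the farmer lives (placeholder IP)
chia configure --set-farmer-peer 192.168.1.50:8447
chia start harvester -r
```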

 

- So yes, you can imagine running your plotting process from a container on the Unraid server if you don't have a separate plotting rig. But then I wouldn't use this early container, and would rather wait for a more mature one which integrates a plotting manager (plotman or Swar), because managing all that manually is a nightmare in the long run, unless you are a script maestro and have a lot of time to spend on it 😉

 

Happy farming !

Thank you so much, for the detailed reply :)
I ordered the pi4 with your recommended case :D I hope I get it installed easily ^^
May I ask if it would be okay to contact you, in case of questions? ^^"
Generally it should work; I'll use the SD card to install Linux on the SSD and try to get it running. I hope I'll be able to move my keys and everything from the docker container to the Pi and it will still work. I'm not sure if the plots are somehow bound to my keys or if I could just get new ones, since I'm in the 0 XCH club anyway :D Right now it seems my node is not picking up all my plots; maybe I broke something by changing the clock to my time zone. I still have some plots running, so I can't rebuild the container to get the default time back. In the worst case I'll have to delete 16 plots, if they are broken due to the wrong clock...

I hope the new setup will then just work; right now it's pretty frustrating. Every day it doesn't work makes it less likely I'll ever win a coin :|

I can't add parity, since my system is full of HDDs; I am limited to 4x 3.5" bays and thus don't want to use a quarter of my available space for parity. It's risky, but I just hope the NAS HDDs keep working fine.


One thing I still didn't understand is the plotting with -f <farmer key>. Which key is that? The one the full node prints out when calling 'chia keys show', or the one from the harvester container I'll be plotting on?


 

Posted (edited)
46 minutes ago, abra8c said:

My time is also delayed by exactly 2h but it says everything is synced.
Is everything ok when it is synced or should I worry about the 2h?

If everything is in sync, don't worry. For me, fixing the time caused a lot of trouble. I still need to confirm it, but there is a chance I lost 2 days of plotting due to changing the clock, because right now the node is not seeing any of the plots I created since I changed the clock. So better don't touch it, especially if everything works!

Edited by Trinity
3 hours ago, MajorTomG said:

@Trinity You seem to have had exactly the same issue that I'm having. My farm is farming, my current blockchain status is "Full Node Synced", but my Wallet status is "Not Synced". Did you or anyone else find a solution in the end?

 


 

The only thing I can think of is that I created the wallet on the Windows client and added it to the docker via the mnemonic.txt method, rather than generating it in the docker itself.

 

 

Sadly I haven't solved it yet. I ordered a Pi 4 now and will change my setup, so I'll run the full node on that Pi and use my unraid server for harvesting only. I tried fixing the clock and rebuilt the container a lot of times, but my wallet has never synced. I have no idea what the problem is. I wanted to post my log here, but right now it's again way too large to be helpful. Tonight I will rebuild the container again, clear the log and let it run for some time, and see if I can get back to the point where the node is fully synced and the farmer is running. If my wallet is then still not synced I will post my log files here and hope someone has an idea what the actual issue is. :)

Posted (edited)
12 minutes ago, Trinity said:

If everything is in sync, don't worry. For me, fixing the time caused a lot of trouble. I still need to confirm it, but there is a chance I lost 2 days of plotting due to changing the clock, because right now the node is not seeing any of the plots I created since I changed the clock. So better don't touch it, especially if everything works!

Thanks! Hope it will be fixed soon.

Edited by abra8c
2 hours ago, Trinity said:

Thank you so much, for the detailed reply :)
I ordered the pi4 with your recommended case :D I hope I get it installed easily ^^
May I ask if it would be okay to contact you, in case of questions? ^^"
Generally it should work; I'll use the SD card to install Linux on the SSD and try to get it running. I hope I'll be able to move my keys and everything from the docker container to the Pi and it will still work. I'm not sure if the plots are somehow bound to my keys or if I could just get new ones, since I'm in the 0 XCH club anyway :D Right now it seems my node is not picking up all my plots; maybe I broke something by changing the clock to my time zone. I still have some plots running, so I can't rebuild the container to get the default time back. In the worst case I'll have to delete 16 plots, if they are broken due to the wrong clock...

I hope the new setup will then just work; right now it's pretty frustrating. Every day it doesn't work makes it less likely I'll ever win a coin :|

I can't add parity, since my system is full of HDDs; I am limited to 4x 3.5" bays and thus don't want to use a quarter of my available space for parity. It's risky, but I just hope the NAS HDDs keep working fine.


One thing I still didn't understand is the plotting with -f <farmer key>. Which key is that? The one the full node prints out when calling 'chia keys show', or the one from the harvester container I'll be plotting on?


 

No problem, PM me if you run into issues setting up the architecture.

For the Pi installation, I chose Ubuntu Server 20.04.2 LTS. You may choose Raspbian instead, that's your choice.

If you go for Ubuntu, this tutorial is just fine : https://jamesachambers.com/raspberry-pi-4-ubuntu-20-04-usb-mass-storage-boot-guide/ .

Note that apart from the enclosure, it's highly recommended to buy the matching power supply by Argon with 3.5A output. The default RPi4 PSU is 3.1A, which is fine with the OS on an SD card, but adding a SATA M.2 SSD draws more current, and 3.5A avoids any risk of instability. The CanaKit 3.5A power adapter is fine also, but not available in Europe for me.

The general steps are : 

- flash an SD card with Raspbian

- boot the RPi from the SD Card and update the bootloader (I'm on the "stable" channel with firmware 1607685317 of 2020/12/11, no issue)

- for the moment, just let the RPi run Raspbian from the SD card

- install your SATA SSD in the base of the enclosure (which is a USB to M.2 SATA adapter), and connect it to your PC

- flash Ubuntu on the SSD from the PC with Raspberry Pi Imager (or Etcher, Rufus, ...)

- connect the SSD base to the rest of the enclosure (USB-A to USB-A), and follow the tutorial from "Modifying Ubuntu for USB booting"

- shut down the RPi, remove the SD card, and now you should boot from the SSD.
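For the bootloader-update step in the list above, a sketch of the usual commands on Raspbian (the rpi-eeprom tooling; exact package state may differ on your image):

```shell
sudo apt update && sudo apt install -y rpi-eeprom
# Show the current bootloader version and whether an update is pending
sudo rpi-eeprom-update
# Apply the latest firmware from the configured release channel and reboot
sudo rpi-eeprom-update -a
sudo reboot
```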

 

By default, the fan is always on, which is useless, as the Argon One M.2 will passively cool the RPi just fine. You can install either the fan-control solution proposed by the manufacturer (see the doc inside the box), though I'm not sure it works with Ubuntu, or a Raspberry Pi community package which is great and installs on all OSes, including Ubuntu, see https://www.raspberrypi.org/forums/viewtopic.php?f=29&t=275713&sid=cae3689f214c6bcd7ba2786504c6d017&start=250 . The install is a piece of cake, and the default params work well.

 

Once the RPi is set up and the fan management of the case installed, just install chia, then try to copy the whole ~/.chia directory from the container (the mainnet directory in the appdata/chia share) onto the RPi. Remove the plots directory from config.yaml, as the local harvester on the RPi will be useless anyway. That should preserve your whole configuration and keys, and above all your sync will be much faster, as you won't start from the very beginning of the chain. Run 'chia start farmer' and check the logs. Connect the Unraid harvester from the container as already explained, and check the connection in the container logs. At that stage, you should be farming the plots stored in the Unraid array from the RPi, through the harvester in the container.
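A sketch of that migration, assuming you can reach the appdata share from the unraid console and log into the Pi over SSH (hostname, user and share path are placeholders for your setup):

```shell
# From the unraid console: push the container's mainnet directory to the Pi
rsync -av /mnt/user/appdata/chia/mainnet/ ubuntu@chiapi:/home/ubuntu/.chia/mainnet/

# On the Pi: drop the plot directory entries, since the local harvester
# can't see the array anyway, then start farming
chia plots remove -d /plots
chia start farmer
```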

 

To see your keys, just type 'chia keys show' on the RPi; it shows your farmer public key (fk) and pool public key (pk). With these two keys, you can create a plot on any machine, even outside your LAN. Just run 'chia plots create <usual parameters> -f fk -p pk'. This signs the plot with your keys, and only a farmer with those keys can farm it. Once the final plot is copied into your array, it will be seen by the harvester, and that's all.


Thank you again for the very detailed answer! I can't wait to get my hands on the Pi, and I hope I manage the install - some things you explain sound pretty complex, like getting the OS onto the SSD. I thought I could boot from the SD and install it on the SSD like on a normal computer xD Not sure how to flash the SSD on my PC without an external M.2 adapter ^^
 

1 hour ago, Gnomuz said:

Note that apart from the enclosure, it's highly recommended to buy the matching power supply by Argon with 3.5A output. The default RPi4 PSU is 3.1A, which is fine with the OS on an SD card, but adding a SATA M.2 SSD draws more current, and 3.5A avoids any risk of instability. The CanaKit 3.5A power adapter is fine also, but not available in Europe for me.

Oh, I saw there are bundles at berrybase and ordered one, which includes the official power supply. I searched for the two PSUs you mentioned, but I only found them on Amazon, where they are not available. So it seems those are pretty hard to get? I'm sitting in Europe as well, so no idea where to get one. If I use the official one, will it give me a headache? What could I do if I don't find any store selling those PSUs?

Posted (edited)

I try to follow all the replies here, but different answers seem to contradict each other - does fixing the timezone solve the sync issue in most cases?

 

And a supporting question: I copied all plots to the unraid share, but I'm still using the full node on my PC to farm these plots (via the network directory on unraid) until the docker syncs. What will happen when 2 devices with the same synced full node (the same mnemonic) farm the same plots? I assume I will not have 2x the chance to win...

Edited by unririd
Posted (edited)

Hi!

For some reason, docker uses an outrageous amount of RAM : (

With one plot running and the -b 4000 parameter set, the following picture is observed:

 

[screenshots of memory usage]

 

Can someone explain why?

Edited by funstuk

Something weird is going on with my logs. I changed my config.yaml file to the following:

 

logging: &id001
    log_filename: log/debug.log
    log_level: INFO
    log_maxfilesrotation: 7
    log_stdout: true

 

I can see INFO items in the log when I open the log via the container's icon. But when I look at debug.log there is nothing there other than the initial warnings from before I made changes to the .yaml file.

 

Is there another place I should be looking for the logs?

 

1 hour ago, unririd said:

I try to follow all the replies here, but different answers seem to contradict each other - does fixing the timezone solve the sync issue in most cases?

 

And a supporting question: I copied all plots to the unraid share, but I'm still using the full node on my PC to farm these plots (via the network directory on unraid) until the docker syncs. What will happen when 2 devices with the same synced full node (the same mnemonic) farm the same plots? I assume I will not have 2x the chance to win...

I am still not deep enough into the whole topic, but for myself I can say: adjusting the clock didn't fix the sync issue. For me it's even worse, the syncing stopped completely. Right now I am far behind the blockchain, which makes farming impossible. I will have to recreate the container.
That's why I still don't understand the official FAQ, which states that the clock shouldn't be more than 5 minutes off. At least for the docker container that seems to be wrong. Or there are additional issues with the docker container. Because of that I plan to move away from it and not use the chia docker as a full node for now.

Running two full nodes could be possible, but not with the same ports and keys. And maybe also not with the same plots. I don't know if the plots are portable; I think it's possible to migrate them to another key, but they cannot be used by two farmers at the same time. At least that would make sense :D
Not sure what will happen if two full nodes with the same keys and ports run in the network at the same time, but I would expect them to disturb each other.
 

 

1 hour ago, adminmat said:

I can see INFO items in the log when I open the log via the container's icon. But when I look at debug.log there is nothing there other than the initial warnings from before I made changes to the .yaml file.


Setting log_stdout: true writes the logs to the standard output, so chia is no longer logging into a file. So of course the log file is empty. If unraid is not grabbing stdout and writing logs itself, there's no file to find the logs in. If I am not wrong ^^"
 

8 minutes ago, Trinity said:

Adjusting the clock didn't fix the sync issue.

 

The clock chia is using is not synced to my timezone, but I'm still syncing. The time shows as UTC in the logs; I thought that was normal.

1 hour ago, adminmat said:

Something weird is going on with my logs. I changed my config.yaml file to the following:

 

logging: &id001
    log_filename: log/debug.log
    log_level: INFO
    log_maxfilesrotation: 7
    log_stdout: true

 

I can see INFO items in the log when I open the log via the container's icon. But when I look at debug.log there is nothing there other than the initial warnings from before I made changes to the .yaml file.

 

Is there another place I should be looking for the logs?

 

log_stdout: true means it no longer sends logs to log/debug.log, but instead to docker's own logging mechanism (the log icon on the container)

 

This is how you can view logs in console

  

On 5/14/2021 at 8:59 AM, tjb_altf4 said:

For anyone that has moved logs to stdout and is having issues using grep on logs, this is the command you need:


docker logs chia 2>&1 | grep YOURKEYWORD

 

 

or simply "docker logs chia" if you want to see everything

Posted (edited)
21 minutes ago, tjb_altf4 said:

log_stdout: true means it no longer sends logs to log/debug.log, but instead to docker's own logging mechanism (the log icon on the container)

 

This is how you can view logs in console

  

 

or simply "docker logs chia" if you want to see everything

 

Thanks. Is there a way to download this file so I can look at it in a text editor? Or do you know how I can navigate to the log file?

 

When I use those commands it prints thousands of lines and crashes my terminal.

 

 

Edited by adminmat
17 minutes ago, adminmat said:

 

Thanks. Is there a way to download this file so I can look at it in a text editor? or do you know how I can navigate to the log file?

 

When I use those commands it prints thousands of lines and crashes my terminal.

You can filter and export to file using docker logs chia 2>&1 | grep ERROR > chia_log_error.txt

The path to the containers full log can be found by running this command: docker inspect chia | grep log
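Building on those two commands, you can also have docker hand you the exact log path instead of grepping for it (assumes the container is named 'chia', as elsewhere in this thread):

```shell
# Resolve the container's JSON log file...
LOGPATH=$(docker inspect --format '{{.LogPath}}' chia)
# ...and follow the last 100 lines (sudo: /var/lib/docker is root-owned)
sudo tail -n 100 -f "$LOGPATH"
```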

Posted (edited)
30 minutes ago, tjb_altf4 said:

You can filter and export to file using docker logs chia 2>&1 | grep ERROR > chia_log_error.txt

The path to the containers full log can be found by running this command: docker inspect chia | grep log

Thanks, that was helpful. I was able to download it. I noticed it's located in /var/lib/docker/containers/.....   Doesn't that mean it's held in memory? And given that it's 1.6GB, I'm thinking it's taking up a decent amount of space.

 

Is there a way to assign the log file a different directory?

 

Edit: The log at that location only shows INFO. It gives no information about the plotting phases, which is what I was hoping for. I'm assuming there is a log somewhere that will let you analyze your plotting status - see how long each phase takes, etc. How can I log this?

Edited by adminmat
8 hours ago, unririd said:

I try to follow all replies here, but different answers seem to be contradicting - is fixing the timezone solving the sync issue for most of the cases?

 

And a supporting question: I copied all plots to unraid share, but I'm still using the full node on my PC to farm these plots (via network directory on unraid) until the docker will sync. What will happen when 2 devices with the same synced full node (the same mnemonic) will farm the same plots? I assume that I will not have 2x chance to win...

I found out today that if I am plotting I don't sync... then when I stop plotting it syncs, and when I plot again it stops again... so... it seems chia can only do one thing at a time

On 5/15/2021 at 11:06 AM, zer0ed said:

So I have swar plot manager running, had time to play with it today.

This setup isn't elegant and probably isn't following best practices but it works for me. Please be mindful of this, if you break something, lose plots, etc.. don't come cryin'

 

Open the docker console (click on the docker --> console) NOTE: not the unraid main console

We are going to pull Swar's git repo and install its python dependencies, installing into /root/.chia in the docker, which points to /mnt/user/appdata in unraid.

 


cd /root/.chia
git clone https://github.com/swar/Swar-Chia-Plot-Manager
cd Swar-Chia-Plot-Manager
/chia-blockchain/venv/bin/pip install -r requirements.txt
cp config.yaml.default config.yaml

 

You can now edit the config.yaml file using an editor supported within this docker, OR from the appdata/chia/Swar-Chia-Plot-Manager folder within unraid (unraid console, krusader, or a windows share if you set that up)

 

Here are some values to be used along with whatever else you set in the config...


chia_location: /chia-blockchain/venv/bin/chia

folder_path: /root/.chia/logs/plotting

  temporary_directory: /plotting
  destination_directory: /plots

 

Now test that it's working:


. /chia-blockchain/activate
python manager.py view

 

Make sure the drives look like they have the correct space and used-space values (if not, you're probably mapping to a folder inside your docker image). /plotting and /plots are the mappings used during the default chia docker setup; that's why we used them here. If you start a chia plotting process and these aren't right, you will fill your docker image to 100% usage! If you're running the "Fix Common Problems" plugin, you will see warnings in the unraid GUI. You'll have to clean up the mess you made by deleting whatever incorrect folders you created in the docker.

 

CTRL+C to get out of view mode

If everything looks good, let's start the swar manager

 


python manager.py start

 

Now whenever you want to use the swar manager, open the chia docker console and run view (or whichever command you need: start, restart, ...). You need to activate the python virtual environment every time before your manager.py command, as stated in the swar documentation. That's the second line you see here..

 


cd /root/.chia/Swar-Chia-Plot-Manager/
. /chia-blockchain/activate
python manager.py view

 

Want to use the main unraid console instead of being stuck inside the docker console, heck, even use tmux? Do the following, then repeat the commands directly above...

 


docker exec -it chia bash

 

Enjoy!
I look forward to any suggestions for improvement, I'm sure there are better methods. 

 

[screenshot]

 

When trying to set this up, it doesn't show me any drives. I'm pretty sure the .yaml is right.

 

[screenshot]

 

 

Posted (edited)
9 hours ago, burgess22 said:

[screenshot]

 

When trying to set this up, it doesn't show me any drives. I'm pretty sure the .yaml is right.

 

[screenshot]

 

 

 

Don't start the manager until you figure this out, or else you will fill the docker image to 100%. It's recoverable - you won't mess up plots or anything, you'll just have to clean up the mess to get your free space back.

 

Not sure what's going on there; the /plotting and /plots directories should point to their respective directories in unraid (outside of the docker). You could put a dummy file or folder in there from outside of the docker, then see if you can see that file from within the docker at /plots and /plotting.

 

cd /plotting
ls

 

What's the output of the df command?
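A quick way to test the mappings, combining the df check with the marker-file idea (the share path /mnt/user/plots is a placeholder for your actual plots share):

```shell
# Inside the container: these should report the array/SSD sizes,
# not the docker image's own small filesystem
df -h /plots /plotting

# From the unraid console, drop a throwaway marker into the share...
touch /mnt/user/plots/marker.txt
# ...then back inside the container it should appear:
ls -l /plots/marker.txt
```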

Edited by zer0ed

SYNC ISSUES?

 

For the full node sync issues many people are having with the docker, I've moved my farmer to a separate Ubuntu Server VM with dedicated resources (4 cores, 2 GB RAM). Shortly after, it started syncing like a dream and has stayed in sync while I plot on the docker and use it as a harvester. Watching htop from within the VM, I have seen it use 100% of 3 cores at times while syncing, and it's just being a full node, farmer and wallet (no plotting or harvesting).

 

The full node not only maintains a copy of the blockchain but also validates it, which I assume fights for resources when plotting on the same machine. I can't be sure, but I've seen a few people on the chia reddit suggest not plotting on the same machine as the full node when they were having sync issues.

 

After I moved my full node to the VM and changed the forwarded ports on the router to the new IP (don't forget about that), it only took a couple of hours to catch up the sync, and it has been holding steady. More info about the different jobs (farmer, harvester, wallet, full node) can be found here: https://github.com/Chia-Network/chia-blockchain/wiki/Network-Architecture  Help on how to set up different machines for different jobs: https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines.

 

After all this, I like to use Chia Harvest Graph to keep an eye on challenge response times on my harvester. You can check the logs with the INFO level set, but this is a lot easier on the eyes :) Just follow the install instructions on its github; it should build fine in the docker without having to apt-get anything.

 

 

Posted (edited)
9 hours ago, burgess22 said:

[screenshot]

 

When trying to set this up, it doesn't show me any drives. I'm pretty sure the .yaml is right.

 

[screenshot]

 

 

Also, to confirm: it works without issue for me.

 

Just to rule out the obvious: you have defined those paths (/plots and /plotting) inside the docker template, right? :)

 

Edited by DoeBoye
Link to comment

I am giving up on getting it to work until the devs fix the syncing issues with the chia docker image.

I have left it running for 3 days now with full permissions, ports opened, everything set up like in the documentation and troubleshooting guides, but my sync always stays approximately 3h behind real time.

Fixing the date issues inside the container didn't help. I think the docker image just has syncing bugs for some people. I'll stick with the windows version for now, which works without any issues, even without opening the ports (since those are opened for the unraid server).
