Chia farming, plotting; array and unassigned devices


Shunz


1 hour ago, sota said:

guess I need to look at using madmax.

 

It is the future for sure, and way simpler since you don't need a ton of parallel plots or have to worry about staggering. You just need a lot of RAM and a reasonably fast drive / RAID array and you can max out your system.

 

It also doesn't burn up SSDs for no reason. I am curious how many writes it actually does to an SSD when using it as temp 1; I am guessing only a few hundred GB based on the drive activity I am seeing.

 

CUDA plotting will speed it up even further; I could see 10-minute plots being possible.

 

Someone will get a bunch of Quadros that can share VRAM, get the 110 GB it needs, and do the intensive part of the plotting in VRAM with the rest in a ramdisk, just you wait.

Edited by TexasUnraid

Ok, here it is with the drives juggled around to get the full bandwidth (1.5 GB/s). Turned out to be harder than I thought; the onboard Intel chipset SAS controller seems to be limited to 1 GB/s as well, so I had to split the drives between the JBOD and the onboard controller.

 

38 mins

 

Interestingly, the writes to the drives do not seem to be a bottleneck. The reads, on the other hand, are a noticeable bottleneck, but they only last a few minutes of the process.

 

Number of Threads: 32
Number of Buckets: 2^7 (128)
Pool Public Key:  
Farmer Public Key: 
Working Directory:   /media/chia/300gbx10/
Working Directory 2: /media/chia/ramdisk/
Plot Name: 
[P1] Table 1 took 14.5279 sec
[P1] Table 2 took 130.336 sec, found 4294943972 matches
[P1] Table 3 took 152.122 sec, found 4294882415 matches
[P1] Table 4 took 185.527 sec, found 4294795945 matches
[P1] Table 5 took 180.186 sec, found 4294543727 matches
[P1] Table 6 took 177.022 sec, found 4294156500 matches
[P1] Table 7 took 136.853 sec, found 4293318270 matches
Phase 1 took 976.595 sec
[P2] max_table_size = 4294967296
[P2] Table 7 scan took 9.91041 sec
[P2] Table 7 rewrite took 43.7098 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 32.0883 sec
[P2] Table 6 rewrite took 52.855 sec, dropped 581341377 entries (13.538 %)
[P2] Table 5 scan took 30.9779 sec
[P2] Table 5 rewrite took 50.1201 sec, dropped 762035006 entries (17.7443 %)
[P2] Table 4 scan took 31.7755 sec
[P2] Table 4 rewrite took 49.5515 sec, dropped 828925070 entries (19.3007 %)
[P2] Table 3 scan took 30.7593 sec
[P2] Table 3 rewrite took 49.5533 sec, dropped 855100275 entries (19.9097 %)
[P2] Table 2 scan took 39.4037 sec
[P2] Table 2 rewrite took 49.8204 sec, dropped 865576918 entries (20.1534 %)
Phase 2 took 492.669 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 63.8108 sec, wrote 3429367054 right entries
[P3-2] Table 2 took 47.5852 sec, wrote 3429367054 left entries, 3429367054 final
[P3-1] Table 3 took 57.6521 sec, wrote 3439782140 right entries
[P3-2] Table 3 took 45.0558 sec, wrote 3439782140 left entries, 3439782140 final
[P3-1] Table 4 took 58.735 sec, wrote 3465870875 right entries
[P3-2] Table 4 took 47.2709 sec, wrote 3465870875 left entries, 3465870875 final
[P3-1] Table 5 took 57.7979 sec, wrote 3532508721 right entries
[P3-2] Table 5 took 49.3429 sec, wrote 3532508721 left entries, 3532508721 final
[P3-1] Table 6 took 67.6292 sec, wrote 3712815123 right entries
[P3-2] Table 6 took 52.2177 sec, wrote 3712815123 left entries, 3712815123 final
[P3-1] Table 7 took 70.2158 sec, wrote 4293318270 right entries
[P3-2] Table 7 took 61.5818 sec, wrote 4293318270 left entries, 4293318270 final
Phase 3 took 684.917 sec, wrote 21873662183 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 106.816 sec, final plot size is 108814036734 bytes
Total plot creation time was 2261.09 sec

 

Using SSDs could cut another minute or two off the time.

 

Gonna mess around with buckets and threads now to see the effects.

Edited by TexasUnraid

Here is a plot with 64 threads instead of 32: a bit faster in some places, but it ended up slightly slower overall than the 32-thread version.

 

Number of Threads: 64
Number of Buckets: 2^7 (128)
Pool Public Key:   
Farmer Public Key: 
Working Directory:   /media/chia/300gbx10/
Working Directory 2: /media/chia/ramdisk/
Plot Name: 
[P1] Table 1 took 14.4823 sec
[P1] Table 2 took 128.733 sec, found 4294885159 matches
[P1] Table 3 took 153.65 sec, found 4294888707 matches
[P1] Table 4 took 187.556 sec, found 4294775742 matches
[P1] Table 5 took 185.006 sec, found 4294637853 matches
[P1] Table 6 took 180.037 sec, found 4294300654 matches
[P1] Table 7 took 140.701 sec, found 4293526906 matches
Phase 1 took 990.183 sec
[P2] max_table_size = 4294967296
[P2] Table 7 scan took 10.4494 sec
[P2] Table 7 rewrite took 45.1316 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 33.978 sec
[P2] Table 6 rewrite took 49.6236 sec, dropped 581386783 entries (13.5386 %)
[P2] Table 5 scan took 29.8368 sec
[P2] Table 5 rewrite took 48.2361 sec, dropped 762058741 entries (17.7444 %)
[P2] Table 4 scan took 31.1902 sec
[P2] Table 4 rewrite took 47.874 sec, dropped 828911528 entries (19.3005 %)
[P2] Table 3 scan took 29.7129 sec
[P2] Table 3 rewrite took 47.3252 sec, dropped 855143082 entries (19.9107 %)
[P2] Table 2 scan took 40.4225 sec
[P2] Table 2 rewrite took 47.8694 sec, dropped 865569814 entries (20.1535 %)
Phase 2 took 482.865 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 68.7052 sec, wrote 3429315345 right entries
[P3-2] Table 2 took 48.2843 sec, wrote 3429315345 left entries, 3429315345 final
[P3-1] Table 3 took 57.744 sec, wrote 3439745625 right entries
[P3-2] Table 3 took 48.4834 sec, wrote 3439745625 left entries, 3439745625 final
[P3-1] Table 4 took 56.8101 sec, wrote 3465864214 right entries
[P3-2] Table 4 took 45.8199 sec, wrote 3465864214 left entries, 3465864214 final
[P3-1] Table 5 took 63.5867 sec, wrote 3532579112 right entries
[P3-2] Table 5 took 53.4874 sec, wrote 3532579112 left entries, 3532579112 final
[P3-1] Table 6 took 69.4337 sec, wrote 3712913871 right entries
[P3-2] Table 6 took 54.0949 sec, wrote 3712913871 left entries, 3712913871 final
[P3-1] Table 7 took 72.2751 sec, wrote 4293526906 right entries
[P3-2] Table 7 took 61.1901 sec, wrote 4293526906 left entries, 4293526906 final
Phase 3 took 706.632 sec, wrote 21873945073 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 107.439 sec, final plot size is 108816105264 bytes
Total plot creation time was 2287.21 sec

 

Next up buckets.


Tried 32 buckets; it was much slower so I cancelled it.

 

Here is 64 buckets, slightly slower.

 

Number of Threads: 32
Number of Buckets: 2^6 (64)
Pool Public Key:   
Farmer Public Key: 
Working Directory:   /media/chia/300gbx10/
Working Directory 2: /media/chia/ramdisk/
Plot Name: plot-k32-2021-06-12-13-12-
[P1] Table 1 took 14.4438 sec
[P1] Table 2 took 144.622 sec, found 4294986549 matches
[P1] Table 3 took 178.015 sec, found 4294939794 matches
[P1] Table 4 took 247.549 sec, found 4294893962 matches
[P1] Table 5 took 248.196 sec, found 4294850013 matches
[P1] Table 6 took 239.747 sec, found 4294724146 matches
[P1] Table 7 took 162.541 sec, found 4294528449 matches
Phase 1 took 1235.13 sec
[P2] max_table_size = 4294986549
[P2] Table 7 scan took 9.32583 sec
[P2] Table 7 rewrite took 44.9001 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 34.0039 sec
[P2] Table 6 rewrite took 50.9818 sec, dropped 581287507 entries (13.5349 %)
[P2] Table 5 scan took 30.7546 sec
[P2] Table 5 rewrite took 48.4746 sec, dropped 761975514 entries (17.7416 %)
[P2] Table 4 scan took 31.4604 sec
[P2] Table 4 rewrite took 48.2638 sec, dropped 828823335 entries (19.2979 %)
[P2] Table 3 scan took 31.1949 sec
[P2] Table 3 rewrite took 48.3881 sec, dropped 855070966 entries (19.9088 %)
[P2] Table 2 scan took 39.4885 sec
[P2] Table 2 rewrite took 48.5345 sec, dropped 865582338 entries (20.1533 %)
Phase 2 took 488.326 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 63.4384 sec, wrote 3429404211 right entries
[P3-2] Table 2 took 44.863 sec, wrote 3429404211 left entries, 3429404211 final
[P3-1] Table 3 took 57.2278 sec, wrote 3439868828 right entries
[P3-2] Table 3 took 45.8666 sec, wrote 3439868828 left entries, 3439868828 final
[P3-1] Table 4 took 56.8793 sec, wrote 3466070627 right entries
[P3-2] Table 4 took 44.9251 sec, wrote 3466070627 left entries, 3466070627 final
[P3-1] Table 5 took 56.9693 sec, wrote 3532874499 right entries
[P3-2] Table 5 took 45.4267 sec, wrote 3532874499 left entries, 3532874499 final
[P3-1] Table 6 took 61.3095 sec, wrote 3713436639 right entries
[P3-2] Table 6 took 49.1649 sec, wrote 3713436639 left entries, 3713436639 final
[P3-1] Table 7 took 70.3762 sec, wrote 4294528449 right entries
[P3-2] Table 7 took 56.8524 sec, wrote 4294528449 left entries, 4294528449 final
Phase 3 took 660.331 sec, wrote 21876183253 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 104.987 sec, final plot size is 108829650727 bytes
Total plot creation time was 2488.86 sec

 

Edited by TexasUnraid
3 minutes ago, sota said:

playing with 'max now.  to give it everything I'd need to shut down all 3 plotters and put everything back into the one machine, so i'm holding off on that for the moment.

 

Seeing as the madmax plotter is CPU limited, you might actually get better performance keeping the setup you have now, if you can't run a plot on each machine to use the total CPU power of all of them.

 

You don't have to use a ramdisk with madmax; it works with normal disks as well, but you will naturally be disk-IO limited. Still faster than the stock plotter.


And the last test I could come up with until the new RAM shows up: 256 buckets. Same time as 128 buckets.

 

Seems madmax is pretty well optimized, you set it and forget it lol.

 

Number of Threads: 32
Number of Buckets: 2^8 (256)
Pool Public Key:   
Farmer Public Key: 
Working Directory:   /media/chia/300gbx10/
Working Directory 2: /media/chia/ramdisk/
Plot Name: 
[P1] Table 1 took 15.1229 sec
[P1] Table 2 took 138.694 sec, found 4295017933 matches
[P1] Table 3 took 159.277 sec, found 4294990274 matches
[P1] Table 4 took 179.729 sec, found 4295070075 matches
[P1] Table 5 took 174.857 sec, found 4295000949 matches
[P1] Table 6 took 170.619 sec, found 4295019399 matches
[P1] Table 7 took 146.804 sec, found 4295078917 matches
Phase 1 took 985.183 sec
[P2] max_table_size = 4295078917
[P2] Table 7 scan took 9.45483 sec
[P2] Table 7 rewrite took 45.7111 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 31.9759 sec
[P2] Table 6 rewrite took 51.6471 sec, dropped 581272676 entries (13.5336 %)
[P2] Table 5 scan took 30.7285 sec
[P2] Table 5 rewrite took 49.8006 sec, dropped 761958084 entries (17.7406 %)
[P2] Table 4 scan took 30.9325 sec
[P2] Table 4 rewrite took 49.0456 sec, dropped 828924580 entries (19.2994 %)
[P2] Table 3 scan took 31.4239 sec
[P2] Table 3 rewrite took 48.7224 sec, dropped 855076257 entries (19.9087 %)
[P2] Table 2 scan took 41.9014 sec
[P2] Table 2 rewrite took 48.3117 sec, dropped 865606515 entries (20.1537 %)
Phase 2 took 492.633 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 67.1143 sec, wrote 3429411418 right entries
[P3-2] Table 2 took 46.8924 sec, wrote 3429411418 left entries, 3429411418 final
[P3-1] Table 3 took 59.1986 sec, wrote 3439914017 right entries
[P3-2] Table 3 took 45.9073 sec, wrote 3439914017 left entries, 3439914017 final
[P3-1] Table 4 took 61.5261 sec, wrote 3466145495 right entries
[P3-2] Table 4 took 46.2073 sec, wrote 3466145495 left entries, 3466145495 final
[P3-1] Table 5 took 61.33 sec, wrote 3533042865 right entries
[P3-2] Table 5 took 47.7984 sec, wrote 3533042865 left entries, 3533042865 final
[P3-1] Table 6 took 65.2404 sec, wrote 3713746723 right entries
[P3-2] Table 6 took 49.4234 sec, wrote 3713746723 left entries, 3713746723 final
[P3-1] Table 7 took 70.0219 sec, wrote 4295078917 right entries
[P3-2] Table 7 took 57.5985 sec, wrote 4294967296 left entries, 4294967296 final
Phase 3 took 685.036 sec, wrote 21877227814 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 104.109 sec, final plot size is 108835830348 bytes
Total plot creation time was 2267.07 sec

 

Edited by TexasUnraid

So I think I have settled on a setup until the extra RAM gets in and I move to a ramdisk: just going to leave the destination blank so the files stay in the temp1 folder, then manually move them over to my farming drive (most likely I'll set up an automated rsync job for this later) so that it does not delay the next plot. Not sure how I will do this with the ramdisk; might have to set up a watch task.
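The rsync job could be as simple as something like this (a sketch only: the destination path is a placeholder for my farm drive, and the filter rules only grab finished .plot files, removing the source copy once transferred):

```shell
# Move finished .plot files out of temp1 to the farm drive.
# --remove-source-files deletes each file from temp1 after a successful copy;
# the include/exclude pair skips anything that is not a finished plot.
rsync -a --remove-source-files \
  --include='*.plot' --exclude='*' \
  /media/chia/300gbx10/ /mnt/farm_drive/
```

Dropping that into cron every few minutes would keep temp1 clear without delaying the next plot.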

 

Running 32 threads and 128 buckets, a 110 GB ramdisk for temp 2, and 10x 10k SAS drives in RAID 0 for temp 1.
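For anyone wanting to replicate it, the setup would look roughly like this (a sketch only: device names, mount points, and keys are placeholders for my hardware, and the mdadm/mkfs steps wipe whatever is on those drives):

```shell
# 110G tmpfs ramdisk for temp 2 (needs root and ~110 GiB of free RAM)
mkdir -p /media/chia/ramdisk
mount -t tmpfs -o size=110G tmpfs /media/chia/ramdisk

# RAID 0 across the 10 SAS drives for temp 1 (/dev/sd[b-k] are placeholders)
mdadm --create /dev/md0 --level=0 --raid-devices=10 /dev/sd[b-k]
mkfs.xfs /dev/md0
mkdir -p /media/chia/300gbx10
mount /dev/md0 /media/chia/300gbx10

# madmax with the settings above; the pool/farmer keys are placeholders.
# Pointing -d at temp 1 keeps finished plots there (same effect as leaving
# the destination blank), so the next plot starts immediately.
chia_plot -r 32 -u 128 \
  -t /media/chia/300gbx10/ \
  -2 /media/chia/ramdisk/ \
  -d /media/chia/300gbx10/ \
  -p <pool_public_key> -f <farmer_public_key>
```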

Edited by TexasUnraid

Guess the only way to know whether a ramdisk on a single machine or disks on multiple machines is better is to try it.

 

I have a feeling that since it was already CPU limited with the current setup, running multiple machines is going to be the best option, since that gives more raw CPU power.

 

I would still swap over to madmax on multiple machines though; others tested it using disks only and saw a significant improvement.

 

Can't hurt to try both ways though.


Set up a basic watch script that automatically moves finished plots onto my farm drive. Not perfect, since I have to manually update the name of the drive when I move to a new one, but good enough for now.
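The logic is something like this (a minimal polling sketch, not the exact script; the paths and drive name are placeholders, and in real use the function runs in a loop):

```shell
#!/bin/bash
# TEMP1 is where madmax leaves finished plots; FARM is the current farm
# drive, which I still have to update by hand when it fills up.
TEMP1=${TEMP1:-/media/chia/300gbx10}
FARM=${FARM:-/mnt/farm_drive_01}

move_finished_plots() {
  for f in "$TEMP1"/*.plot; do
    [ -e "$f" ] || continue            # glob matched nothing: no plots yet
    mv "$f" "$FARM"/ && echo "moved: $(basename "$f")"
  done
}

# In real use this polls forever so copies happen in the background:
# while true; do move_finished_plots; sleep 60; done
```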

 

Can confirm it is doing a plot every 38-40 minutes with the plots copying in the background. On track for 36 plots a day on this one machine, and I should be able to do a bit better than that on my main server if the Docker container is able to utilize both CPUs.


Tested it in a Docker container on Unraid; after some tweaking I got it down to 34 minutes, but the HDDs are slower since I only have 3 of them in the RAID, so it is a bit bottlenecked there.

 

There is also the Docker overhead; still not bad.

 

With the price of Chia dropping, I am not sure it is worth putting any more effort into it. This has been fun with the spare server, but I think I will just leave it to plot alone for now and see how things pan out.

 

Gonna try corepool if they open registration again soon, otherwise I will try hpool.

 

If I could just make back the $100 I spent on chia I would be happy at this point.


well this has been a not fun time.

went to check stuff this morning, and found the farm "rack" off.

tripped the GFCI on the circuit everything was plugged into.  swell.

also, i'm having problems running madmax; testing on one of the servers, it randomly dies in the middle of a plot.  no errors shown on the screen to tell me why.

 


and regarding pricing, i'm of the opinion that this is THE time to get as many plots up and running.  lower coin price will naturally mean more people will bail out. that can only improve the odds for those that remain.  I doubt we'll see any major farms go belly up, but who knows?  if their money backers get nervous, they could start pulling plugs.  I'm going to stay the course, get the 15-block of 4TB drives I've slated for plots done, then re-evaluate the way they're powered up/connected (this MD1000 is a goddamn power HOG) and plan on just letting the plots run for a decent period of time.

3 minutes ago, sota said:

well this has been a not fun time.

went to check stuff this morning, and found the farm "rack" off.

tripped the GFCI on the circuit everything was plugged into.  swell.

also, i'm having problems running madmax; testing on one of the servers, it randomly dies in the middle of a plot.  no errors shown on the screen to tell me why.

 

 

Odd, did something short out?

 

I have had no issues on Linux, but I have seen people complain about using it on Windows. I think someone compiled a Windows version?

1 minute ago, sota said:

and regarding pricing, i'm of the opinion that this is THE time to get as many plots up and running.  lower coin price will naturally mean more people will bail out. that can only improve the odds for those that remain.  I doubt we'll see any major farms go belly up, but who knows?  if their money backers get nervous, they could start pulling plugs.  I'm going to stay the course, get the 15-block of 4TB drives I've slated for plots done, then re-evaluate the way they're powered up/connected (this MD1000 is a goddamn power HOG) and plan on just letting the plots run for a decent period of time.

 

Yeah, not sure what is going to happen. I am not sure I see this as a viable way to make a profit farming; the company seems to be focused on IPO profits, with the price of Chia / farming profits coming a long way behind.

 

I will most likely do similar to you; I already have a server on 24/7, might as well toss my old drives in there and let it ride. Of course, I only have around 30-40 TB of old drives.

 

I don't really see the netspace shrinking anytime soon either. I would not be surprised at all if it hits 50-100 EiB before it stabilizes; that is what I am basing my numbers on.

 

At 50 EiB of netspace, each TB of plots is worth about $1/month.

At 100 EiB, they are worth about $0.50/month.

 

So about $15-$30/month for my spare drives. Not worth buying new drives, that is for darn sure, and at that price you have to start factoring power usage into any rigs you turn on just for Chia.
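For reference, the math behind those numbers: a plot's expected winnings are just its fraction of the netspace times the total daily payout. The 4608 blocks/day and 2 XCH/block figures are Chia's mainnet constants; the $190/XCH price is purely an assumption (roughly where it sat before the drop).

```shell
# Rough monthly value of 1 TiB of plots in USD, given netspace (EiB) and
# an assumed XCH price. 1 EiB = 1024^2 TiB.
plot_value_usd() {
  awk -v netspace_eib="$1" -v price="$2" 'BEGIN {
    share = 1 / (netspace_eib * 1024 * 1024)   # 1 TiB as a fraction of netspace
    xch_per_month = share * 4608 * 2 * 30      # blocks/day * XCH/block * days
    printf "%.2f\n", xch_per_month * price
  }'
}

plot_value_usd 50 190    # ~$1.00/month per TiB at 50 EiB
plot_value_usd 100 190   # ~$0.50/month per TiB at 100 EiB
```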

 

Guess we will see where the netspace stabilizes. What are you thinking?

9 hours ago, tjb_altf4 said:

IMO once pooling protocol kicks off it will be better, lack of an incremental reward at minimum is killing it for those not highly scaled.

Still its a nice learning period, no doubt netspace will go off when pools start up.

 

Oh yeah the netspace will take off for sure. I just don't know what to make of the coin as a whole. On one hand it has a lot of potential. On the other it seems all the company cares about is the IPO and making money from that end. They never saw this kind of explosion.

 

There are a LOT of old hard drives lying around that will be put into service for this; I just don't know if buying new drives will ever make sense. Without the ability to expand, profitability is pretty limited.

 


Yeah, you will be drive limited like that; you will need to increase the drive bandwidth to do much better.

Like I said before, you might be better off with parallel plotting like you were doing. You can still use madmax, but you won't get the same gains as others with faster drives.

 

Since I am just using a single system right now and it has the memory for a ramdrive, the speeds are faster.

 

My optimized parallel speed was around 32 plots a day, but with a lot more hassle. Now I am doing 36 plots a day, and it is way, way simpler.

Edited by TexasUnraid
Just now, sota said:

... and I just had a 4TB drive *** itself apparently.

 

Well I guess at least it was not being used for anything important!

 

Well, I just did it: got Hpool installed and started farming. Slowly moving my plots over to a more long-term home now that I don't need all the drives for parallel plotting.

 

Ended up installing hpool on Windows using Sandboxie Plus. I have used Sandboxie for many years, but it went open source recently and has really improved since then; I use it for anything I don't trust. Got hpool sequestered into a sandbox and it seems to be working.

 

Now I just need to make a new wallet so I can withdraw to an uncompromised address. Not a big deal, since I can use the new one for re-plotting when official pools are released.


Freezer? That works? VERY interesting, never heard of that one.

 

That is a pretty good price for those drives, $7.90/TB, a little more than half what new shucked drives run (before Chia).

 

Of course you have to factor in the drives that die, but you would have to lose 19 of them to equal the cost. And with new drives you can get 12 TB+, and they use less power and take up less space.

 

I just started my real-deal storage server last year during lockdown with my stimulus check, so I am lucky enough to have all 12-14 TB drives so far; they take up a lot less space.

