Mr_4braham

Posts posted by Mr_4braham

  1. On 5/21/2021 at 12:25 PM, Pducharme said:

    @guy.davis Hi! Just couple of newbie questions.  For the Plotting SSD, would you use SATA SSD, NVMe on a PCI-e card or NVMe on Mobo ?  I guess SATA SSD cheapest, but slower of the lot?   For the final Farming, would you all put that outside of the array using Unassigned devices or for the farming part, using the Free space of the array is good enough?

    What I am doing is this: my Machinaris container moves freshly made plots from the plotting directory to a share called chia_faucet, and it farms from both chia_faucet and chia_farm. The chia_faucet share is set to cache-only on the pool chiafaucet, which consists of the single disk I am currently filling up with plots. The chia_farm share is read-only and set to cache-only on the pool chiafarm, which contains all the disks I have already filled up with plots. With this setup I never need to change the container's farming or plotting directories: once the disk assigned to the chiafaucet pool gets full, I simply stop the array and move that disk to the chiafarm pool, where it continues to be farmed but is now read-only, so the plots are better protected against being overwritten. Then I assign the next disk I want to fill up to the chiafaucet pool.

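    In case it helps picture the move step, here is a rough sketch of what it boils down to, assuming hypothetical paths /mnt/user/plotting for the plotting directory and /mnt/user/chia_faucet for the faucet share. Machinaris has its own plot handling, so treat this purely as an illustration of the idea, not what the container actually runs:

      import shutil
      import time
      from pathlib import Path

      # Hypothetical paths; adjust to your own share layout.
      PLOTTING_DIR = Path("/mnt/user/plotting")
      FAUCET_SHARE = Path("/mnt/user/chia_faucet")

      def move_finished_plots():
          """Move completed .plot files to the faucet share."""
          # In-progress plots normally carry extra temp suffixes, so only
          # finished *.plot files should match here.
          for plot in PLOTTING_DIR.glob("*.plot"):
              if shutil.disk_usage(FAUCET_SHARE).free < plot.stat().st_size:
                  print(f"Not enough free space on {FAUCET_SHARE}, waiting")
                  break
              print(f"Moving {plot.name} -> {FAUCET_SHARE}")
              shutil.move(str(plot), str(FAUCET_SHARE / plot.name))

      if __name__ == "__main__":
          while True:
              move_finished_plots()
              time.sleep(300)  # check every 5 minutes
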
  2. 22 hours ago, JorgeB said:

    Unraid supports multiple pools, so you can have several small pools of a few disks each; how many disks I would feel comfortable with depends mostly on their size. The plots on each pool can all be on the same share (the same one as the array, if you also have plots there) as long as you adjust the use cache setting correctly. That way chia can access all the plots with a single entry, e.g. /mnt/user/plots, and if you lose a disk in a pool you only lose the plots in that pool.

    I have not managed to assign more than one pool to a share during my testing; for me each share can only have one cache pool, and pools are presented as a cache, not as disks, so maybe I am missing something. Can you explain how to assign multiple pools to a share? And would that mean each pool is written to in order until it fills up, instead of writing across all pools at the same time?
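
    One workaround I am considering, in case the single-share mapping does not pan out: just register one plots directory per pool mount with the harvester instead. A rough sketch, assuming each pool is mounted at /mnt/<poolname> with a plots folder on it (the pool names are placeholders, and the chia plots add commands are only printed, not executed):

      from pathlib import Path

      # Hypothetical pool names; each Unraid pool shows up under /mnt/<poolname>.
      POOLS = ["chiafarm", "chiafaucet"]

      # One plots directory per pool, instead of relying on a single
      # user-share path that spans several pools.
      plot_dirs = [Path("/mnt") / pool / "plots" for pool in POOLS]

      for d in plot_dirs:
          if d.is_dir():
              # 'chia plots add -d' registers a directory with the harvester;
              # printed here as an illustration rather than executed.
              print(f"chia plots add -d {d}")
          else:
              print(f"# {d} not found, skipping")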

  3. 54 minutes ago, guy.davis said:

     

    Thanks!  Unfortunately, with Chia pools being a late add-on to the Chia protocol (months after initial release in March), there is no way to perform "Single-click Farming" right now.  You're right that the sequence is:

    1) Launch Machinaris, create/import your private mnemonic pass-phrase.

    2) Switch to Settings | Pools -> click 'Get Mojos' link, enter your public address to the Chia faucet 

    3) Wait days! for both the initial blockchain sync and the mojos to appear in your wallet.  

    4) Switch to Settings | Pools, once you have mojos, select a Pool and join.

    5) This creates a PlotNFT which you can then use to Plot with.  Can't plot until you have this "pool_contract_address".

     

    Chia devs are saying they are going to improve this in the coming year.  I sure hope so...

     

    Thank you for clarifying this.
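
    For step 5 above, here is roughly where that pool_contract_address ends up if you drive the plotter from the plain chia CLI yourself. Machinaris wraps this for you, and the key size, paths and address below are placeholders, so this is only an illustration of why the PlotNFT has to exist first:

      import subprocess

      # Placeholder values for illustration only.
      POOL_CONTRACT_ADDRESS = "xch1...address_from_your_plotnft..."
      TMP_DIR = "/mnt/plotting"     # fast SSD scratch space
      DEST_DIR = "/mnt/chiafaucet"  # final plot destination

      # 'chia plots create' with -c makes a portable (pool-capable) plot;
      # without a PlotNFT there is no contract address to pass, which is
      # why plotting for a pool has to wait for steps 1-4.
      subprocess.run(
          [
              "chia", "plots", "create",
              "-k", "32",
              "-n", "1",
              "-t", TMP_DIR,
              "-d", DEST_DIR,
              "-c", POOL_CONTRACT_ADDRESS,
          ],
          check=True,
      )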

     

    55 minutes ago, guy.davis said:

    Yes, consensus seems to be that creating a new Pool in Unraid, with NO parity enabled, and adding your disks there, meaning a single volume mount into the Machinaris container, can work well and be a simple approach.  

     

    I'm using individual unassigned devices (5 only) and a bit of free space on my main array.  This also works, but is a pain at 30 devices.

     

    I wouldn't recommend using your main Array to hold all your plots.  Plots shouldn't be protected by Parity (causing lost disk space).  Hope this helps!

     

    My point is that if you use a pool that is not the main array, data gets spread across it, so each plot you own ends up split between all, or at least most, of the disks in the pool. In the event of a disk failure, all or most of your plots would then be lost. Re-plotting the contents of a single disk is no big deal, but losing all or most of your plots is a real problem. With the main array you can simply run without a parity disk and set the share to fill each disk in sequence, so your plots are not spread across the disks; however, you can only have one main array and it maxes out at 30 disks, so it does not scale. Using unassigned devices scales, but is a pain to manage, as you pointed out.

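    To put rough numbers on the point above (illustrative arithmetic only, assuming 1000 plots spread evenly over a 10-disk pool versus the same disks filled one at a time):

      # Illustrative comparison: plots lost when a single disk dies.
      DISKS = 10
      TOTAL_PLOTS = 1000

      # Spread/striped pool: every plot has pieces on most of the disks,
      # so one dead disk takes out (almost) everything.
      striped_loss = TOTAL_PLOTS

      # Fill-up placement: each plot lives entirely on one disk, so one
      # dead disk only takes its own share of plots with it.
      fill_up_loss = TOTAL_PLOTS // DISKS

      print(f"spread pool: ~{striped_loss} plots lost")   # ~1000
      print(f"fill-up:     ~{fill_up_loss} plots lost")   # ~100
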
  4. @guy.davis Thank you for this awesome container. I am a Chia noob and I am trying it out for the first time on Machinaris. I cannot join a pool until my blockchain is synced, because my wallet will not sync to show my single mojo. Is there a way to start plotting a pool-valid plot without joining any pool?

    Another question: I plan on using unassigned devices for farming, but I really like the idea of farming on an Unraid pool because you can utilize 100% of each disk's space. However, I noticed that if I create a pool in single mode and create a share that uses the pool as its cache, with "Use cache pool (for new files/directories):" set to Only, then when I move my plots to the pool, a single plot gets split between multiple drives even though the share is set to "Allocation method: Fill up". I think that setting only applies to the main array disks assigned to the share. That means that when one of the pool drives fails, I will lose a lot, or maybe all, of my plots. Is there a way to make Unraid fill up each disk sequentially, one by one? Or is using the main array the only possible way? I don't want to use the main array because I have more than 30 drives for Chia, which would mean buying multiple Unraid licenses and having to manage multiple OSes (poor scaling).
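
    As a rough sketch of the fill-one-disk-at-a-time idea with unassigned devices (the mount points under /mnt/disks/ are hypothetical placeholders), a small helper could always target the first disk that still has room:

      import shutil
      from pathlib import Path

      # Hypothetical unassigned-device mount points; names are placeholders.
      FARM_DISKS = [Path(f"/mnt/disks/chia{i:02d}") for i in range(1, 6)]

      PLOT_SIZE = 109_000_000_000  # roughly one k32 plot, in bytes

      def pick_destination():
          """Return the first farm disk with room for one more plot, or None."""
          for disk in FARM_DISKS:
              if disk.is_dir() and shutil.disk_usage(disk).free >= PLOT_SIZE:
                  return disk
          return None

      def place_plot(plot_path: Path):
          """Move a finished plot to the first disk that still has space."""
          dest = pick_destination()
          if dest is None:
              raise RuntimeError("All farm disks are full")
          shutil.move(str(plot_path), str(dest / plot_path.name))
          print(f"{plot_path.name} -> {dest}")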

  5. My router was assigning the same IP address to my Raspberry Pi and my Unraid server for some reason, which stopped me from accessing Unraid through the web UI. I SSH'd into Unraid and ran a shutdown now command. Unraid didn't really shut down: the fans are still spinning and it is still drawing power from the UPS, but I cannot remote into it anymore. I have since found out this is not the proper way to shut down Unraid. I am trying to avoid doing a forced shutdown and having to run a parity check; is there a way to regain SSH access to Unraid so I can power it down properly?