tjb_altf4

Members
  • Content Count

    875

Community Reputation

151 Very Good

About tjb_altf4

  • Rank
    not great, not terrible


Recent Profile Visitors

3044 profile views
  1. IMO once the pooling protocol kicks off it will be better; the lack of an incremental reward, at minimum, is killing it for those not highly scaled. Still, it's a nice learning period, and no doubt netspace will take off when pools start up.
  2. Cloud based backup provided by LimeTech, instead of local backup like this plugin provides.
  3. Nope. This seems to be a perpetuated myth. I got my single plot times down from 9.5 hrs to 1 hr using NVMe storage alone.
  4. Note I only have 64GB of RAM on my server, so this was a "normal" plot on NVMe drives.
  5. Just gave madmax a go... wow. (Note: this seems legit, with few vectors for issues and lots of eyeballs on the code, but please DYOR.) On the original Chia plotter, with an overzealous 12C/6G allocation, it still took 9.5 hrs to plot. Throwing 24C at the madmax plotter got this down to 1 hr 6 min! Lots of interesting optimizations to experiment with around lower core counts and parallel jobs! Unfortunately I've run out of storage, so more plotting will have to wait a little while.
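For anyone wanting to try it, a madmax invocation along these lines would use all 24 cores. This is a hypothetical sketch, not my exact command: the paths are placeholders for your own NVMe temp dirs and final plot share, and the keys come from your own `chia keys show` output.

```shell
# Hypothetical example -- substitute your own paths and keys.
# -r = threads, -t/-2 = temp dirs (NVMe!), -d = final destination.
./chia_plot -r 24 \
    -t /mnt/nvme1/plot-tmp/ \
    -2 /mnt/nvme2/plot-tmp/ \
    -d /mnt/user/chia/plots/ \
    -f <farmer_public_key> \
    -p <pool_public_key>
```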
  6. Apparently a docker build is on the way, with only a ~5% performance drop compared to bare metal. Sounds like a good deal to me. K32 will be dead far sooner than the devs anticipated, and the final nail in the coffin will certainly be the coming GPU acceleration from the madmax guys. If you're replotting for pools, I'd think twice about K32, just my 2c.
  7. Array is currently at 90TB + 1P, but 5TB of nvme and 6TB of UD bumps me over the 100TB mark! Looking to expand further, just need to work out an appropriate solution to house all the disks 🤣
  8. I had got one earlier too; good feedback and many units sold over a few years, so it should be ok.
  9. You can save the PCIe slot by using the Molex-only variant (RES2CV240) on eBay. The seller offered it to me at US$75, so they would likely accept this price if you asked. https://www.ebay.com/itm/182419599540
  10. https://github.com/Chia-Network/chia-blockchain/releases/tag/1.1.7 nice to see fixes flowing quickly!
  11. Confluence

     I set it up about 4 years ago, so I'm a bit fuzzy on the details, but it went something like this:
       • Install Postgres
       • Create a database and user in Postgres
       • Install Jira/Bitbucket/Confluence
       • On the Jira/Bitbucket/Confluence startup screen, point the setup at the Postgres database and user
     Most of the Atlassian stack is fussy about which Postgres version you use, so make sure you're on the right version.
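For the Confluence case, the "create database and user" step looks roughly like the following in psql. The names and password are placeholders, and Atlassian documents specific encoding/collation requirements, so double-check their docs for your version:

```sql
-- Hypothetical names and password; run as the postgres superuser.
CREATE USER confluenceuser WITH PASSWORD 'changeme';
CREATE DATABASE confluencedb OWNER confluenceuser ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE confluencedb TO confluenceuser;
```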
  12. I see there's a new docker version. Digging into the change history, it looks like there are two great QOL changes specific to docker:
       • The TZ environment variable is now supported, so logs can now be shown with local timestamps! I'm 99% sure Unraid adds this already based on your Unraid settings, so no action should be needed to get this to work.
       • /chia-blockchain/venv/bin/ has been added to the PATH environment variable, which should mean you can use "chia" instead of "venv/bin/chia".
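As a quick sketch of both changes together (the container name and image reference are placeholders, not a specific recommendation):

```shell
# TZ is normally injected by the Unraid template already; shown explicitly here.
docker run -d --name chia -e TZ=Australia/Sydney <official_chia_image>
# With venv/bin on PATH inside the container, the CLI can be called directly:
docker exec chia chia farm summary
```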
  13. This is the official container; @Partition Pixel just made an Unraid template for it. You can confirm by comparing the image name + registry in the template with the official one listed on their GitHub. I've previously done what you're asking, but the other way around: I had a Windows VM with read-only access via SMB to my Unraid chia share. That works great for farming, but I wouldn't plot/harvest over SMB. Of course, you could just set up a docker on the remote Unraid system as a harvester only and point it to your full node on the local Unraid machine.
  14. I have them on my array, though if multi-array support comes along I would move them into a separate array. I would note that in this scenario you want to be plotting outside of the array, and preferably not writing the final plot to the array either... I find jobs can back up on the final transfer. If you are no longer plotting, none of that is of any consequence.
  15. I found 1 hr staggers are a good starting point; I had good results with that. I once got it down to 30 min, but I had to build in a job cap of around 12 to keep the system happy (it's still a NAS and app server for everything else!).
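The relationship between stagger and job cap is just arithmetic: at steady state, the number of jobs in flight tops out around plot_time / stagger. A minimal sketch, assuming a ~6 hr plot time (an illustrative assumption, not a benchmark from my setup):

```python
import math

def max_concurrent_jobs(plot_time_min: int, stagger_min: int) -> int:
    """Steady-state number of overlapping plot jobs for a given stagger."""
    return math.ceil(plot_time_min / stagger_min)

# A ~6 hr plot with 1 hr staggers keeps about 6 jobs in flight;
# tightening the stagger to 30 min roughly doubles that to ~12,
# which is where the job cap above comes from.
print(max_concurrent_jobs(360, 60))  # -> 6
print(max_concurrent_jobs(360, 30))  # -> 12
```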