Everything posted by tjb_altf4

  1. Do the chia forks not have a docker image? You should be able to build one yourself based on the dockerfile for chia (maybe an older one)... rough sketch below.
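     A minimal sketch of that build-it-yourself approach, assuming the fork's source tree still works with Chia's published Dockerfile (repo name and image tag here are illustrative):

         # grab the Dockerfile Chia publishes for its own image
         git clone https://github.com/Chia-Network/chia-docker
         cd chia-docker
         # edit the Dockerfile so it installs the fork's blockchain repo
         # instead of Chia-Network/chia-blockchain, then build locally
         docker build -t myfork-chia:local .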
  2. The carbon build has now been split out and is continued below; updates to fortytwo hopefully coming in the next week or so...
  3. PCIe 4.0 x8 = PCIe 3.0 x16 (approximately), so I think you'd barely notice a difference in performance, if any... rough numbers below. You will need to spend more on a riser to ensure it runs at PCIe 4.0 speeds though; many will drop down to PCIe 3.0. One area to watch: I believe the bottom slot goes through the chipset, so make sure you don't use that slot for gaming.
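     The arithmetic behind that equivalence: PCIe 3.0 carries roughly 0.985 GB/s per lane and PCIe 4.0 doubles that to roughly 1.97 GB/s, so 8 lanes of 4.0 ≈ 8 × 1.97 ≈ 15.8 GB/s ≈ 16 × 0.985 GB/s, i.e. 16 lanes of 3.0.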
  4. Decided I'm far enough along in this project to break this build away from my fortytwo Threadripper server build thread. If you've not seen that build, follow this link. My server build fortytwo has acted as a fileserver, application server, and also seen some VM use... mostly headless Windows VMs. Carbon is different: carbon will replace my current 7700K build, which has been a traditional workstation and gaming rig. I work from home nearly fulltime, like to game, and have been using it as a homelab for my computer science degree studies for the last 4 years. I plan to use virtualisation daily, but with provision to run bare metal for some gaming activities that are currently anti-VM. Docker will also get a workout, but more so for development purposes and running some handy tools.
     As this build will be shut down or put into standby when not in use, I felt more comfortable watercooling it. Why not just a big-ass Noctua cooler? I would and have in the past, but the space restrictions of moving to 4U make that an impossibility; the 1950X has an NH-U9 which barely keeps up under load, while the U14 it had previously (in a tower case) was great. Availability issues gave me an extra 12 months of saving for a new build, so I splashed out a little. The build details are:
     CASE: Rosewill 4U RSV-L4500U
     CPU: AMD 5950X 16-core 4.9GHz beast
     MOBO: Asus Crosshair VIII Dark Hero X570
     RAM: 64GB G.Skill 3200MHz C14 (4x16GB)
     ARRAY: tbc
     POOL: 4TB NVMe, 2x 980 Pro 2TB
     NIC: Mellanox ConnectX-3 10GbE (carried over from old build)
     PSU: EVGA P2 1000W (carried over from old build, RIP)
     PSU: EVGA G3 750W (replacement)
     GPU: Asus TUF OC Nvidia 3090
     So enough of the text, let's see some pictures.
     Test fitment: designed a 3D printed bracket to allow the rad to sit snugly up top, with cables and watercooling passing below; this also allows a mix of expelled hot air and cool intake air for the rest of the system.
     Watercooling plumbed up, testing with the GPU from the old build.
     Current state: 3090 installed, dead PSU replaced. Yes, dead PSU: after 5 years my EVGA P2 1000W died. It took a few hours of system instability to narrow it down to the PSU; initially I thought it may have been XMP or the mobo being too aggressive, but luckily I had a spare PSU to test with from a decommissioned build. What a relief to know it wasn't a new component. Just started the RMA process (7 year warranty!), hopefully I can get a replacement as it was otherwise a great PSU.
     Watercooling parts: basically a full house of EKWB. Their recent Magnitude CPU block was the only new part; everything else was from older decommissioned builds... lucky I'm a hoarder. Hard line was out of the question, as I wanted easy maintenance since I'm always shuffling components; I love the ZMT tubing, and had some spare. I had some issues with mounting hardware (screws/bolts rounding), but EK sent replacement bits in under a week, which is amazing considering I'm down here in Australia.
     Anyway, I think it has turned out quite well. I can pump the CPU on Prime95 with temps in check on quiet fan profiles. Next steps are to do more stability testing, load up unraid and set up some VMs. Eventually the GPU will be water cooled also; the block is in hand, but as there are no space constraints on this, I'm in less of a rush.
  5. A 3090 TUF (2.5 slot) on a Dark Hero leaves half a slot of clearance, which is OK, but the Strix is 2.75 slots wide, so you may want to consider buying a different card to help with thermals. You can also consider mounting a fan in front of this space to help with airflow. If noise isn't an issue, you can use MSI Afterburner to ramp up fans to keep the area cooler. Ultimately you will need to move a lot of air in and out of the case to keep up. One scenario that might help with spacing: 3070 on top, P2200 on bottom, 3080 on a riser somewhere it can expel heat more easily, i.e. vertical mount. I've watercooled my CPU, and will eventually do the same for the GPU to get space back and get better thermals.
  6. It's a disk share (pool share technically, lol), as it has been globally excluded from user shares. The disk share happens to have a folder on it with the same name as a user share. The files in the folder "chia" on the Scratch pool do not appear in the user share "chia" (expected behavior), however they are included in the "chia" user share size calculations (unexpected behavior).
  7. https://www.tomshardware.com/news/truenas-gets-into-chia-farming-truepool 1.23PB? We need an Unpool @jonp 😛 Yes, I know it's not core business, yadda yadda, boring excuses.
  8. I've linked the chia article that covers most of it, but the easy way to set this up is as a new full node using your existing passphrase (or use the key exchange method in the wiki) with the extra fields filled out (harvester:true, farmer_address:IP, farmer_port:PORT), roughly as sketched below. I think you might need to disable some services in the CLI as well, but see how you go. If later you want this docker to be the full_node, you should only need to change the harvester field to false. https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines
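     A minimal docker-compose sketch of those fields, assuming the official chia image; the exact variable names depend on your image version's README, and the IP, port, and paths here are illustrative:

         services:
           chia-harvester:
             image: ghcr.io/chia-network/chia:latest
             environment:
               - keys=persistent            # or follow the wiki's key exchange method
               - harvester=true             # the extra fields from the post
               - farmer_address=192.168.1.10
               - farmer_port=8447           # chia's default farmer port
             volumes:
               - /mnt/user/chia/plots:/plots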
  9. If you only have one machine, why have a separate harvester? In case this is just a terminology mixup: if you are just wanting to have a separate plot-producing docker, it doesn't need any connections and you can use the CLI to shut down all running services (see the sketch below). Just make sure the plot final directory is one the other docker has access to.
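     A sketch of what that plotting-only container would run, assuming the standard chia CLI; the k-size and paths are illustrative, and check your CLI version's exact stop syntax:

         # stop every running service, including the daemon
         chia stop all -d
         # plot only; the final dir must be visible to the farming docker
         chia plots create -k 32 -n 1 -t /plotting/tmp -d /mnt/user/chia/plots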
  10. So I noticed that a user share drive listing (used for computing share size) includes pools that are set to not participate in user shares, when they happen to have the same top level folder name as those shares. This leads to incorrect size calculations. Note that the folder itself does not appear in the SMB user share (as expected), so this is more of a cosmetic issue that also impacts utilization calculations, rather than a functional problem with user share file/folder inclusions. [Screenshots: user share list; disk share list (pool not participating in user shares); share size compute detail]
  11. Officially abandoned ship on this one. After probably 6 months of it only working 50% of the time and the "Unknown" movie bug giving me the shits, I've moved on. I was hoping to wait for SickChill to finish movie support, but that looks a while off yet. Ended up moving to Radarr and couldn't be happier, works flawlessly. Hopefully someone starts maintaining CP again, as it was quite good at what it did, when it worked.
  12. Received the replacement parts from EKWB (super fast turnaround) and was able to finish the install. Did some stress testing and was happy with the results: 16C/32T @ 100% for 15min and it was silent nearly the whole time, with only a slight ramp up of fans towards the end... very happy!
  13. So we get the choice of a parity bottleneck (most free) or a single array disk bottleneck (fill up / highwater)
  14. I'm going to be honest, the pricing is now at the same level it was a year or two ago (for those of us outside NA). You guys in the US just get spoiled by big retail and their insane discounting.
  15. EKWB has come to the party and sent replacement mounting hardware, so I should be able to run this up at 100% in about a week's time. From there the migration from my existing bare metal install to unraid as a daily PC begins. I'm also super happy with the radiator support bracket I've designed and printed; after 3-4 iterations it is a perfect fit. At a later stage I'll reprint it in a more heat tolerant material, but PLA is fine for now. Now onto those other designs I need to finish...
  16. Alternative option: if you don't use Emby and the gaming server simultaneously, you could set Emby up in a VM so you can utilize the GPU for hardware transcoding when not gaming. Or even set up Emby Server on your gaming VM.
  17. Found this plugin recently; it would have saved my bacon many times over if I had found it years ago.
  18. I've found Windows to be pretty graceful in handling the iSCSI disks coming online and offline, which often happens as I'm doing maintenance on the array. A reconnect sketch is below.
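     For reference, a PowerShell sketch of wiring a target up so Windows reconnects it by itself after the array comes back; the portal address and IQN are illustrative:

         # register the unraid box as a target portal, then connect persistently
         New-IscsiTargetPortal -TargetPortalAddress 192.168.1.20
         Connect-IscsiTarget -NodeAddress "iqn.2021-05.net.unraid:array" -IsPersistent $true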
  19. You can, but for a given PWM header you can either let the BIOS control it or override it in the plugin. For instance, on my mobo the CPU fan controller is accessible in the plugin and I can override it, but I let the BIOS manage it and it ramps up and down with usage/temp as you would expect. Perhaps you just need to split out your fans to different headers to get the balance of control you need?
  20. Moved back to latest today, latest build seems to be all good!
  21. Don't forget to replace all the cables, even if they seem compatible with the new PSU, they may be wired differently... or simply one of them might have been a contributing factor to the PSU failure
  22. I think there have been some solid efficiencies introduced in the new chia plotter, my CPU is barely trying with 10 parallel plots. It used to be at 80%+ usage; let's see how 14 jobs in parallel go haha
  23. Currently waiting on EKWB support to provide replacement hardware. When I dialed the mounting hardware in to the correct spec it turned to cheese, and EK don't seem to provide spares with their blocks anymore (even their crazy priced ones). I'm pretty sure I'm stuck waiting a few weeks before the build can progress. I've done a test run at low mounting pressure and there is potential for some decent performance there; can't wait until I can mount it properly and use it in anger! Currently the RGB colour puke is linked to CPU temp thresholds, but I might have to switch to orange for the final iteration...
  24. I'll be super bummed if multi array isn't in the 6.10 release; I really want to isolate my chia adventures from my other data. Unraid itself has solved so many issues others have struggled to resolve without using 3rd party tools. FYI, you can mount drives into a folder instead of as a drive letter to get around the Windows drive letter limitation, as sketched below.
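     A quick sketch of that folder-mount trick using Windows' built-in mountvol; the volume GUID is a placeholder, and running mountvol with no arguments lists yours:

         mkdir C:\Mounts\Disk01
         mountvol C:\Mounts\Disk01 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\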
  25. Had lots of problems with IO sending farm response times through the roof when I had my final directory on the array. Ended up creating a RAID0 array of 2x HDD as a faster "final" directory, then transferring to the array with rsync + ionice, roughly as below. I am now running 4 plot transfers at a time with zero impact to response times. As long as all dirs are mapped to the chia app (and added in its config), there is never any plot downtime.
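     Roughly what one of those transfers looks like; the exact flags are my guess at a sensible invocation and the paths are illustrative:

         # move finished plots to the array at idle IO priority so farming
         # lookups on the same disks aren't starved
         ionice -c3 rsync -av --remove-source-files /mnt/scratch/plots/ /mnt/user/chia/plots/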