Everything posted by tjb_altf4

  1. USB passthrough is still an outstanding issue on Hyper-V; it impacted some colleagues last week. They just went with VirtualBox. VMware Player is another option too.
  2. Could the feature request poll for 6.11 be moved to a forum where all members can comment, not just My Server users?
  3. Time to vote for inclusion in 6.11
  4. Aesthetically it bothers me, but functionally it does what it's meant to.
  5. Your choices are really:
     [easy] run base JEDEC speeds (2666?)
     [advanced] loosen timings and drop speed to 2933 or lower (there may be XMP profiles you can use)
     [expensive] buy a known working kit
     I've been using a G.Skill kit (F4-3200C14Q-64GVK) which uses B-die, so it's able to run higher speeds more easily (stable for 3+ years on the older, more finicky 1950X). I'm also using 4x 16GB modules to avoid loading up the RAM controller with 2 banks. The downside is that the fast B-die kits are expensive.
  6. Since upgrading my workstation (carbon), Fusion 360 has not wanted to start and freezes on startup; no amount of reinstalls and fixes has resolved the seemingly common issue, so this has stopped progress on the custom drive mounting solution.
     However, I was still able to upgrade a little: my 2TB WD Blue, which died after a long, arduous history, was finally pulled out and replaced with another spare WD Red 4TB from my old array. So now the scratch pool is a 3-disk RAID0 pool (12TB), which amusingly is the size of the first Unraid array I built ~5 years ago.
     I have also enjoyed mucking around with Chia, so I've expanded into a dedicated pool of 2x 18TB. A bonus was that these particular shucks didn't have PWDIS, which makes life easier, with no power mods needed. If multi-pools were here I would have made it a separate pool; I'll look to convert when that option becomes available.
     Those of you that paid attention will note I now have 16 drives in a 15-drive chassis... well, one drive got shoved down the side where the gap for the IO panel and cables is, haha.
     This brings the server up to ~150TB across array, pools and UD. I plan to move towards 0.25PB once the new drive mounting arrangement is sorted and in place.
  7. I understand the hack violates the EULA, but I'm curious how Hyper-V can now support GPU-P, which accomplishes the same thing (on both Nvidia and AMD), without creating licensing issues. We might get lucky and see similar changes adopted in KVM. Another video from Jeff for context 😛
  8. Can I request that we have separate logging for the Mover? Currently there is a choice of:
     - no mover logging
     - filling the syslog with mover logs
     The latter makes it difficult to troubleshoot issues, with most users told to disable mover logging, while the former obscures data movements in the system, which is not ideal for a NAS OS.
  9. iSCSI plugin in Community Applications for Unraid version 6.9+
  10. I've read that audio devices often need a dedicated USB controller passed through to the VM to work.
  11. Do the Chia forks not have a Docker image? You should be able to build one yourself based on the Dockerfile for Chia (maybe an older one); rough sketch below.
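      A hypothetical example of rolling your own image, assuming the fork keeps the upstream repo's Docker build files (the repo URL and mount paths are placeholders, not a real project):

      ```bash
      # Build a local image from the fork's own Dockerfile, then run it
      # with appdata and plots mounted. All names/paths are placeholders.
      git clone https://github.com/example/chia-fork.git
      cd chia-fork
      docker build -t chia-fork:local .
      docker run -d --name chia-fork \
        -v /mnt/user/appdata/chia-fork:/root/.chia \
        -v /mnt/user/chia/plots:/plots \
        chia-fork:local
      ```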
  12. The carbon build is now split out and continued below; updates to fortytwo are hopefully coming in the next week or so...
  13. PCIe 4.0 x8 = PCIe 3.0 x16 (approximately); back-of-envelope math below. You will need to spend more on a riser to ensure it runs at PCIe 4.0 speeds though; many will drop down to PCIe 3.0. I think you'd barely notice a difference in performance, if any... One area to watch: I believe the bottom slot goes through the chipset, so make sure you don't use that slot for gaming.
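      The equivalence comes from per-lane bandwidth doubling each generation (figures rounded; actual per-lane throughput is roughly 0.985 GB/s for Gen 3 and 1.97 GB/s for Gen 4):

      ```latex
      % Approximate one-direction bandwidth:
      \underbrace{8 \times 2\,\mathrm{GB/s}}_{\text{Gen 4, x8}}
      \;\approx\;
      \underbrace{16 \times 1\,\mathrm{GB/s}}_{\text{Gen 3, x16}}
      \;\approx\; 16\,\mathrm{GB/s}
      ```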
  14. Decided I'm far enough along in this project to break this build away from my fortytwo Threadripper server build thread. If you've not seen that build, follow this link.
      My server build fortytwo has acted as a fileserver, an application server, and also seen some VM use... mostly headless Windows VMs. Carbon is different: carbon will replace my current 7700K build, which has been a traditional workstation and gaming rig. I work from home nearly fulltime, like to game, and have been using it as a homelab for my computer science degree studies for the last 4 years. I plan to use virtualisation daily, but with provision to run bare metal for some gaming activities that are currently anti-VM. Docker will also get a workout, but more so for development purposes and running some handy tools.
      As this build will be shut down or put into standby when not in use, I felt more comfortable watercooling it. Why not just a big-ass Noctua cooler? I would and have in the past, but the space restrictions of moving to 4U make that an impossibility; the 1950x has an NH-U9 which barely keeps up under load, while the U14 it had previously (in a tower case) was great. Availability issues gave me an extra 12 months of saving for a new build, so I splashed out a little. The build details are:
      CASE: Rosewill 4U RSV-L4500U
      CPU: AMD 5950X 16-core 4.9GHz beast
      MOBO: Asus Crosshair VIII Dark Hero X570
      RAM: 64GB G.Skill 3200MHz C14 (4x16GB)
      ARRAY: tbc
      POOL: 4TB NVMe (2x 980 Pro 2TB)
      NIC: Mellanox ConnectX-3 10GbE (carried over from old build)
      PSU: EVGA P2 1000W (carried over from old build... RIP)
      PSU: EVGA G3 750W (replacement)
      GPU: Asus TUF OC Nvidia 3090
      So enough of the text, let's see some pictures.
      Test fitment: designed a 3D printed bracket to allow the rad to sit snugly up top, with cables and watercooling passing below; this also allows a mix of expelled hot air and cool intake air for the rest of the system.
      Watercooling plumbed up, testing with the GPU from the old build.
      Current state: 3090 installed, dead PSU replaced. Yes, dead PSU: after 5 years my EVGA P2 1000W died. It took a few hours of system instability to narrow it down to the PSU; initially I thought it may have been XMP or the mobo being too aggressive, but luckily I had a spare PSU to test with from a decommissioned build. What a relief to know it wasn't a new component. Just started the RMA process (7 year warranty!); hopefully I can get a replacement, as it was otherwise a great PSU.
      Watercooling parts: basically a full house of EKWB; their recent Magnitude CPU block was the only new part, everything else was from older decommissioned builds... lucky I'm a hoarder. Hard line was out of the question as I wanted easy maintenance (I'm always shuffling components); I love the ZMT tubing, and had some spare. I had some issues with mounting hardware (screws/bolts rounding), but EK sent replacement bits in under a week, which is amazing considering I'm down here in Australia.
      Anyway, I think it has turned out quite well; I can pump the CPU on Prime95 with temps in check on quiet fan profiles. Next steps are to do more stability testing, load up Unraid and set up some VMs. Eventually the GPU will be water cooled also; the block is in hand, but as there are no space constraints on this, I'm in less of a rush.
  15. A 3090 TUF (2.5 slot) on a Dark Hero leaves half a slot of clearance, which is OK, but the Strix is 2.75 slots wide, so you may want to consider buying a different card to help with thermals. You can also consider mounting a fan in front of this space to help with airflow. If noise isn't an issue, you can use MSI Afterburner to ramp up fans to keep the area cooler. Ultimately you will need to move a lot of air in and out of the case to keep up. One scenario that might help with spacing: 3070 on top, P2200 on bottom, 3080 on a riser somewhere it can expel heat more easily, i.e. vertical mount. I've watercooled my CPU, and will eventually do the same for the GPU to get space back and get better thermals.
  16. It's a disk share (pool share technically lol) as it has been globally excluded from user shares. The disk share happens to have a folder on it of the same name as a user share. The files in the folder "chia" on the Scratch pool do not appear in the user share "chia" (expected behavior), however they are included in the "chia" user share size calculations (unexpected behavior).
  17. https://www.tomshardware.com/news/truenas-gets-into-chia-farming-truepool 1.23PB? We need an Unpool @jonp 😛 yes I know it's not core business yadda yadda boring excuses
  18. I've linked the Chia article that covers most of it, but the easy way to set this up is as a new full node using your existing passphrase (or use the key exchange method in the wiki) with the extra fields filled out (harvester:true, farmer_address:IP, farmer_port:PORT); a hedged docker run sketch is below. I think you might need to disable some services in the CLI as well, but see how you go. If later you want this docker to be the full_node, you should only need to change the harvester field to false. https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines
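      For illustration only (image tag, IP, port and host paths are assumptions, and the farmer's CA certs still need to be shared with the harvester per the linked wiki):

      ```bash
      # Hedged sketch of a harvester-only container using the env fields
      # mentioned above; everything concrete here is a placeholder.
      docker run -d --name chia-harvester \
        -e harvester=true \
        -e farmer_address=192.168.1.50 \
        -e farmer_port=8447 \
        -v /mnt/user/appdata/chia-harvester:/root/.chia \
        -v /mnt/user/chia/plots:/plots \
        ghcr.io/chia-network/chia:latest
      ```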
  19. If you only have one machine, why have a separate harvester? In case this is just a terminology mix-up: if you just want a separate plot-producing docker, it doesn't need any connections and you can use the CLI to shut down all running services (sketch below). Just make sure the plot final directory is one the other docker has access to.
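      A rough sketch, assuming the official chia image layout where the CLI lives at venv/bin/chia (container name, paths and plot settings are placeholders):

      ```bash
      # Stop all networked services in the plot-only container, then plot
      # to a final directory the farming container also has mounted.
      docker exec -it chia-plotter venv/bin/chia stop all -d
      docker exec -it chia-plotter venv/bin/chia plots create \
        -k 32 -n 1 -r 4 -t /plot-temp -d /plots
      ```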
  20. So I noticed that a user share drive listing (used for computing share size) includes pools that are set to not participate in user shares, when they happen to have the same top-level folder name as those shares. This leads to incorrect size calculations. Note that the folder itself does not appear in the SMB user share (as expected), so this is more of a cosmetic issue that also impacts utilization calculations, rather than a functional problem with user share file/folder inclusions.
      User share list: [screenshot]
      Disk share list (pool not participating in user shares): [screenshot]
      Shares size compute detail: [screenshot]
  21. Officially abandoned ship on this one. After probably 6 months of it only working 50% of the time and the "Unknown" movie bug giving me the shits, I've moved on. I was hoping to wait for SickChill to finish movie support, but that looks a while off yet. Ended up moving to Radarr and couldn't be happier; it works flawlessly. Hopefully someone starts maintaining CP again, as it was quite good at what it did, when it worked.
  22. Received the replacement parts from EKWB (super fast turnaround) and was able to finish the install; did some stress testing and was happy with the results. 16C/32T @ 100% for 15 min, and it was silent nearly the whole time, with only a slight ramp up of the fans towards the end... very happy!
  23. So we get the choice of a parity bottleneck (most free) or a single array disk bottleneck (fill up / highwater)
  24. I'm going to be honest, the pricing is now at the same level as it was a year or 2 ago (for those of us outside NA). You guys in the US just get spoiled by big retail and their insane discounting.
  25. EKWB has come to the party and sent replacement mounting hardware, so I should be able to run this up at 100% in about a week's time. From there, the migration from my existing bare-metal install to Unraid as a daily PC begins. I'm also super happy with the radiator support bracket I've designed and printed; after 3-4 iterations it is a perfect fit. At a later stage I'll reprint it in a more heat-tolerant material, but PLA is fine for now. Now onto those other designs I need to finish...
Ă—
Ă—
  • Create New...