Einsteinjr
-
Is the assumption that this will be fixed in a new release?
-
Can confirm this issue with my 64-core EPYC processor. I had to revert because disabling SMT made everything much slower, including the web GUI.
-
FYI - for a while now I've been getting a much higher skipped-SP rate than normal (>20%). Did a bit of research and log debugging and found that modifying the [machinaris.Config]/mainnet/config/config.yaml file fixed the issue: target_peer_count needs to be changed from 80 to 20. The errors I was seeing disappeared and the skipped rate is now <1%.
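For anyone wanting to make the same edit, a minimal sketch of the change, assuming the stock Chia config layout where this setting lives under the full_node: section of config.yaml:

```yaml
full_node:
  # Default is 80; lowering it reduces peer-handling load on the node
  target_peer_count: 20
```

Restart the container afterwards so the node picks up the new value.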
-
CPU usage is going to depend on the age and power of your CPU, but the RAM usage is expected. Apparently reducing the node's peer targets would help with the memory usage, but I don't know if Machinaris supports that. There's lots of online material about how much memory Chia farming consumes - it needs to load the database into memory, and it's not a problem unique to Machinaris. FWIW, I have a 64-core CPU @ 2400 MHz running Chia + 10 forks pinned to just 8 cores in the docker, and I never see the CPUs spike to 100%. Usually 0-60%. I'm guessing you have an old CPU?
-
If anyone was wondering whether there's any power difference from splitting the different harvesters across CPU cores versus stacking them on the same ones, there isn't much (<2%). Currently running (for testing) a 64-core AMD EPYC processor. I put all the dockers on 4 cores (and their corresponding hyperthreads) and it produced just a small decrease in power usage (about 4 watts). Related: 4 cores / 8 threads seems plenty for ALL the harvesters to run on.
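On Unraid this kind of pinning can be done per container via the CPU Pinning settings or the template's Extra Parameters field, which passes flags straight to docker run. A sketch of the flag involved - the core numbering is an assumption here, since on many 64-core EPYC parts the SMT sibling of core N is core N+64:

```
--cpuset-cpus="0-3,64-67"
```

That restricts the container to physical cores 0-3 plus their hyperthread siblings; check lscpu or /proc/cpuinfo on your own box to confirm which logical CPUs are siblings before copying the numbers.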
-
Anybody else have their x265 transcoding suddenly stop working? I have an NVIDIA T600 running the latest driver (530.41.03) with the BinHex container. Currently on the RC5 build of 6.12.0, which I think may be related since the problem seemed to coincide with the upgrade. What's strange is that my GPU dashboard tool shows a PlexTranscoder process inside the docker holding onto a GPU transcode instance when I try to transcode a 4K x265 file - but the amount of RAM being used is much less than normal for a 4K video (300MB vs. 30MB). Here are the logs from Plex that show the error:

May 12, 2023 12:03:43.404 [22944562506552] ERROR - [Req#8228/Transcode] [FFMPEG] - Failed to initialise VAAPI connection: -1 (unknown libva error).
May 12, 2023 12:03:43.404 [22944562506552] DEBUG - [Req#8228/Transcode] Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: I/O error

log.txt
-
I've been blindly installing the TEST build for all the different dockers, but I don't know which of them need their database upgraded to save disk space. A cool new feature would be a tool that at least identifies which dockers need to be migrated to the new datastore; v2 could do the actual upgrade.
-
For anyone having RAM capacity issues: I ended up getting a second-gen AMD EPYC that supports 8x 32GB sticks. Got it from China (eBay) and am quite pleased - almost 3x more points in CPU benchmarks than my previous system. Coming from an 8th-gen i7-8700, power consumption is only about 80 watts higher. Not bad for lots of PCIe lanes, more RAM slots, and 3x more cores. I also live in an area where power is cheap (0.13 USD/kWh). Only downside is if you use your system for Plex: you have to figure out the optimal hardware transcoding + power consumption tradeoff, since you lose the Intel iGPU transcoding beast. I don't like how much CPU software transcoding uses.