Everything posted by Einsteinjr

  1. Is the assumption that this will be fixed in a new release?
  2. Can confirm this issue with my 64-core EPYC processor. I had to revert because disabling SMT made everything much slower, including the webgui.
  3. FYI - for a while now I'd been getting a much higher skipped-SP rate than normal (>20%). I did a bit of research and log debugging and found that modifying the [machinaris.Config]/mainnet/config/config.yaml file fixed the issue: change target_peer_count from 80 to 20. The errors I was seeing disappeared and the skipped rate is now <1%. (See the config sketch after this list.)
  4. CPU usage is going to depend on the age and power of your CPU, but the RAM usage is expected. Apparently reducing the node's peer targets would help with memory usage, but I don't know if Machinaris supports that. There's lots of online material about how much memory Chia farming consumes - it needs to load the database into memory, and that's not unique to Machinaris. FWIW, I have a 64-core CPU @ 2.4GHz running Chia + 10 forks pinned to just 8 cores in the docker, and I never see the CPUs spike to 100% - usually 0-60%. I'm guessing you have an old CPU?
  5. If anyone was wondering whether there's any power difference from consolidating the different harvesters onto the same CPU cores, there isn't much (<2%). Currently running (for testing) a 64-core AMD EPYC processor. I put all the dockers on 4 cores (and their corresponding hyperthreads) and saw just a small decrease in power usage (about 4 watts). Relatedly, 4 cores/8 threads seems plenty for ALL the harvesters to run on. (Pinning sketch after this list.)
  6. Anybody else have their x265 transcoding suddenly stop working? I have an nVidia T600 running the latest driver (530.41.03) with the BinHex container. I'm currently on the RC5 build of 6.12.0, which I think may be related, since the problem seemed to coincide with the upgrade. What's strange is that my GPU dashboard tool shows a PlexTranscoder process inside the docker holding onto a GPU transcode instance when I try to transcode a 4K x265 file - but the amount of RAM in use is much less than normal for a 4K video (30MB now vs. the usual 300MB). Here are the Plex logs showing the error (full log attached as log.txt; a quick in-container GPU check is sketched after this list):
     May 12, 2023 12:03:43.404 [22944562506552] ERROR - [Req#8228/Transcode] [FFMPEG] - Failed to initialise VAAPI connection: -1 (unknown libva error).
     May 12, 2023 12:03:43.404 [22944562506552] DEBUG - [Req#8228/Transcode] Codecs: hardware transcoding: opening hw device failed - probably not supported by this system, error: I/O error
  7. Profit probably needs to be removed. I can't start a timelord server either.
     2023-03-27T14:02:00.730 timelord profit.timelord.timelord : ERROR Not active for 60 seconds, restarting all chains
     2023-03-27T14:02:00.730 timelord profit.timelord.timelord : INFO reset_chains new_unfinished_blocks: 0
  8. Is it just me, or does anyone else have at least 1 or 2 appear to be out of sync at any given time? Currently Ecostake, Petroleum, Profit, and Tad are all out of sync. =/ I wonder if I'd have better luck if my ISP supported IPv6.
  9. I've done the same thing using an unassigned SSD drive. Just stop the containers, move the appdata folders over to the new location, update the appdata location in each container, and start the containers back up. As long as you keep the root path consistent, it's easy. (Rough command sketch after this list.)
  10. Same issue:
      2022-11-09T09:54:22.051 farmer flax.farmer.farmer : WARNING No keys exist. Please run 'flax keys generate' or open the UI.
      I even deleted the entire appdata for this container and redownloaded the DB.
  11. I think the Flax test build is broken. When I start it, it says it cannot find the key and therefore sits in the "paused" state in the Machinaris webgui. It only started happening after I updated to the latest test build.
  12. I've been blindly installing the TEST build for all the different dockers, but I don't know which of them need their database upgraded to save disk space. A cool new feature might be a tool that at least identifies which dockers need to be upgraded to the new datastore; v2 could do the actual upgrading.
  13. For anyone having RAM capacity issues, I ended up getting a second-gen AMD EPYC that supports 8x 32GB sticks. Got it from China (eBay) and am quite pleased: almost 3x more points in CPU benchmarks than my previous system, and going from an 8th-gen i7-8700 to this only costs about 80 more watts to operate. Not bad for lots of PCIe lanes, more RAM slots, and 3x more cores. I also live in an area where power is cheap (0.13 USD/kWh). The only downside is if you use your system for Plex: you have to figure out the optimal hardware-transcoding-plus-power-consumption setup, since you lose the Intel iGPU transcoding beast. I don't like how much CPU software transcoding uses.
  14. So... what are the chances that UnRAID will support these new iGPUs, which will likely have decent video decoding and encoding? Could they compete with QuickSync? From what I can tell, AMD APU support in UnRAID has been non-existent. Will that maybe change with this new socket generation?
  15. I'm going to guess the system is offline, considering the DNS introducer does not respond to ping requests.
  16. Has anyone had any success with syncing Ecostake? I get this log output constantly with no sync progress. Is IPv6 (AAAA) a requirement? (See the DNS check sketch after this list.)
      2022-07-18T13:29:22.878 full_node ecostake.full_node.full_node: INFO Received 1 peers from DNS seeder, using rdtype = A.
      2022-07-18T13:29:22.885 full_node ecostake.full_node.full_node: WARNING querying DNS introducer failed: The DNS response does not contain an answer to the question: dns-introducer.ecostake.online. IN AAAA
      2022-07-18T13:30:40.869 full_node ecostake.full_node.full_node: INFO Received 1 peers from DNS seeder, using rdtype = A.
      2022-07-18T13:30:40.874 full_node ecostake.full_node.full_node: WARNING querying DNS introducer failed: The DNS response does not contain an answer to the question: dns-introducer.ecostake.online. IN AAAA
  17. FYI - Profit and BPX are using the same network port - 8945. I tried changing BPX to use 8946, but it seems some things are hard-coded, which makes it impossible to change easily.
  18. I also have this issue. It seems to have started when I modified the subdomain list, and I'm seeing similar reports on the LetsEncrypt community page. I had to change over to the DNS plugin (Cloudflare) to get validation working.
  19. Make sure you are pinning all of your CPU cores to the container; it's under the advanced settings of the container. (See the pinning sketch after this list.)
  20. What percentage of skipped SPs do people get daily? Mine depends on the fork, but for XCH it's around 0.03%. Some forks just stop responding for quite a few minutes, based on the alerts I get. Guessing it's maintenance work?
  21. You're gonna need to provide more information for something like this. Are you confident that the chia fork has completed all its syncing? It will eat up a lot of CPU while it's syncing.
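
Re: item 3 above - a minimal sketch of the change, assuming the stock Chia-style layout where target_peer_count sits under the full_node section of mainnet/config/config.yaml (verify against your fork's actual file before editing):

    # mainnet/config/config.yaml (excerpt; assumes the stock Chia layout)
    full_node:
      target_peer_count: 20   # default is 80; fewer peers cut the load that was causing skipped SPs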
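Re: items 5 and 19 above - in UnRAID the pinning lives in the container's advanced settings, but the plain-Docker equivalent is sketched below. The container name is a placeholder, and on a 64-core EPYC the SMT sibling of core N is typically core N+64 (check yours with lscpu -e):

    # pin a running container to cores 0-3 plus their SMT siblings (64-67 here)
    docker update --cpuset-cpus="0-3,64-67" machinaris
    # confirm the pinning took effect
    docker inspect --format '{{.HostConfig.CpusetCpus}}' machinaris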
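Re: item 6 above - one quick check for those symptoms is whether the container still sees the GPU at all; binhex-plexpass is a placeholder for your actual container name:

    # does the container see the T600 and driver 530.41.03?
    docker exec binhex-plexpass nvidia-smi
    # watch NVENC session count and VRAM while a 4K transcode is attempted
    docker exec binhex-plexpass nvidia-smi --query-gpu=encoder.stats.sessionCount,memory.used --format=csv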
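Re: item 9 above - the move itself, sketched as shell commands; the paths and container name are examples only, substitute your own appdata share and SSD mount point:

    # stop the container first so appdata isn't written to mid-copy
    docker stop machinaris
    # copy appdata to the unassigned SSD, preserving permissions
    rsync -a /mnt/user/appdata/machinaris/ /mnt/disks/ssd/appdata/machinaris/
    # point the container's appdata path at the new location in its settings, then:
    docker start machinaris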
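Re: item 16 above - the log itself shows the A lookup already returns peers, so the failing AAAA (IPv6) lookup alone is probably not what's blocking sync. You can reproduce both lookups by hand (hostname taken from the log):

    # the A query that yields the 'Received 1 peers' line
    dig +short A dns-introducer.ecostake.online
    # the AAAA query that produces the WARNING (no answer)
    dig +short AAAA dns-introducer.ecostake.online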