sworcery

Members
  • Posts

    13
  • Joined

  • Last visited

Everything posted by sworcery

  1. Is there a way to export a world out to a .mcworld or even a zip? I want to use MCC ToolChest to delete the chunks that aren't in use so I can get the newer Caves & Cliffs generation and such, but this world is from 1.17.
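     In case it's useful to anyone, a minimal sketch of the export done by hand, assuming the server keeps its worlds in a /worlds directory inside the container (that path is an assumption; use wherever your world folder actually lives). A .mcworld is just a zip of the world folder's contents:
       # zip the *contents* of the world folder (level.dat, db/, etc.), not the folder itself
       cd /worlds/MyWorld          # hypothetical world path
       zip -r /tmp/MyWorld.mcworld .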
  2. Could it be that having my P2200 in a higher PCI-e slot is overriding the GPU variable? I have the RTX 2080 in my bottom slot because otherwise it would block a PCI-e slot I need. I could get a riser cable and try placing the 2080 in a higher slot. The GPU variable was working until a recent update, though. The most recent update I have a record of was on May 15, the variable was still working before a reboot about 5 days ago, and updates run nightly.
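     A quick way to check whether slot order changed the enumeration (a sketch, assuming the NVIDIA driver tools are installed on the host): the numeric index can shift with slot position, but the UUID stays fixed to the card:
       nvidia-smi -L
       # example output (UUIDs here are placeholders):
       # GPU 0: Quadro P2200 (UUID: GPU-xxxxxxxx-....)
       # GPU 1: NVIDIA GeForce RTX 2080 (UUID: GPU-yyyyyyyy-....)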
  3. I haven't been able to play for a few days now; I kept getting the out-of-date client message. I had to add :1.18.33.02-01 to the repo field and I'm back up and running. It had me on 1.19 even though I don't have anything marked for beta.
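     For anyone hitting the same thing, this is roughly what pinning looks like in the template's repository field (repository name omitted here; keep whatever yours already is):
       # floating tag, pulls whatever the maintainer currently publishes
       <repository>:latest
       # pinned to a known-good server build
       <repository>:1.18.33.02-01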
  4. I've also tried wiping the config file that gets created, and both with and without the NVIDIA_VISIBLE_DEVICES variable defined, I end up with the same issues.
  5. I'm having an issue. I've defined the NVIDIA_VISIBLE_DEVICES variable and targeted only one of the 2 GPUs in my system, a Quadro P2200 and an RTX 2080. I would like to target only the RTX 2080 for mining; however, even after defining the GPU to target in the docker variables, both the P2200 and RTX 2080 show up. When I try to disable the P2200 in Trex settings, I receive this error: Edit config error (number of specified LHR tuning values in config doesn't match the number of GPUs). This error happens with or without NVIDIA_VISIBLE_DEVICES defined in the docker settings. I've thought about manually pausing the P2200, but then I run into the same error when I try to limit power on the 2080.
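     One thing worth trying, sketched here under the assumption that the container runs with the NVIDIA runtime: pass only the 2080 into the container by UUID (taken from nvidia-smi -L) rather than by index, so the miner never sees the P2200 at all:
       # extra parameter / variable on the container (UUID is a placeholder)
       --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=GPU-yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
       # or by index, if the enumeration order is stable
       -e NVIDIA_VISIBLE_DEVICES=1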
  6. OK great, thanks. This just finished for me; the v1 db was 74GB, the v2 db is 39GB. Are we safe to delete the v1 db at this point? I can see the sqlite-shm and sqlite-wal files are for v2 after a container restart.
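     For anyone landing here later, a rough sketch of what this looks like from a shell inside the container (paths assume the default mainnet layout; double-check yours before deleting anything):
       # the v1 -> v2 conversion (the upgrade command referenced above)
       chia db upgrade
       # once the node restarts cleanly on the v2 file, the v1 file can be removed
       ls ~/.chia/mainnet/db/
       #   blockchain_v1_mainnet.sqlite   blockchain_v2_mainnet.sqlite   (plus the v2 -shm/-wal files)
       rm ~/.chia/mainnet/db/blockchain_v1_mainnet.sqlite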
  7. @guy.davis do we need to run this db upgrade command on all forks or just chia?
  8. IIRC @guy.davis said on Discord that flora isn't ready just yet; he's working on fixes on the :develop branch.
  9. Just wanted to make sure this is correct. I have a standard wallet and 2 pooling wallets. The pool I joined is on wallet ID 2, and it appears to be functioning properly. I wasn't sure if wallet ID 3 should be any concern. I had joined and left another pool previously, before figuring out how to properly set up my pool and plotting. Only wallet IDs 2 and 3 show up on my pooling page. I'm seeing frequent partials on my pool as well.
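     If anyone wants to check the same thing, one way to see which plot NFT each wallet ID belongs to and what state it's in (a sketch; run from a shell where the chia CLI is available):
       chia plotnft show
       # lists each plot NFT with its wallet ID and current state,
       # e.g. FARMING_TO_POOL for the active pool, and SELF_POOLING or
       # LEAVING_POOL for the one that was joined and then left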
  10. Did you list it in the config within the container too? There is a section in the farming and plotting config where you can add extras. I have, for example:
      - /plots
      - /plots2
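      For context, in the chia config.yaml that section looks roughly like this (the directory names are just my mounts; yours will differ):
        harvester:
          plot_directories:
            - /plots
            - /plots2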
  11. I'm using the latest test build and am still experiencing the same thing as last night. Anything I can provide to help you pinpoint the cause?
  12. Flax netspace is showing up as ? and I'm currently having issues connecting to the chia network. This was working prior to updating to the :test branch. Also, Flax appears to not be seeing half my plots that are on a shared network drive, but chia does? edit: I think this may have something to do with the timelord.
      root@InnisShallows:/chia-blockchain# chia start all
      chia_harvester: Already running, use `-r` to restart
      chia_timelord_launcher: started
      chia_timelord: Already running, use `-r` to restart
      chia_farmer: Already running, use `-r` to restart
      chia_full_node: Already running, use `-r` to restart
      chia_wallet: Already running, use `-r` to restart
      root@InnisShallows:/chia-blockchain# chia_timelord_launcher -r
      Traceback (most recent call last):
        File "/chia-blockchain/venv/bin/chia_timelord_launcher", line 11, in <module>
          load_entry_point('chia-blockchain==1.1.7.dev79+gaba34c1c.d20210629', 'console_scripts', 'chia_timelord_launcher')()
        File "/chia-blockchain/venv/lib/python3.9/site-packages/chia_blockchain-1.1.7.dev79+gaba34c1c.d20210629-py3.9.egg/chia/timelord/timelord_launcher.py", line 107, in main
          loop.run_until_complete(spawn_all_processes(config, net_config))
        File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
          return future.result()
        File "/chia-blockchain/venv/lib/python3.9/site-packages/chia_blockchain-1.1.7.dev79+gaba34c1c.d20210629-py3.9.egg/chia/timelord/timelord_launcher.py", line 85, in spawn_all_processes
          await asyncio.gather(*awaitables)
        File "/chia-blockchain/venv/lib/python3.9/site-packages/chia_blockchain-1.1.7.dev79+gaba34c1c.d20210629-py3.9.egg/chia/timelord/timelord_launcher.py", line 45, in spawn_process
          path_to_vdf_client = find_vdf_client()
        File "/chia-blockchain/venv/lib/python3.9/site-packages/chia_blockchain-1.1.7.dev79+gaba34c1c.d20210629-py3.9.egg/chia/timelord/timelord_launcher.py", line 39, in find_vdf_client
          raise FileNotFoundError("can't find vdf_client binary")
      FileNotFoundError: can't find vdf_client binary
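      From the traceback this looks like the timelord being launched without the vdf_client binary present. A sketch of two ways around it, assuming a stock chia-blockchain checkout inside the container: either build the timelord tooling, or start the services without the timelord group:
        # option 1: build vdf_client from the chia-blockchain source tree (with the venv active)
        cd /chia-blockchain && . ./activate && sh install-timelord.sh
        # option 2: skip the timelord entirely; this group starts only the
        # full node, farmer, harvester and wallet
        chia start farmer -r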
  13. I like that in only my first week of using it I was able to spin up a 40TB array with 2x4TB parity drives and a 250GB cache drive in a matter of hours. Then, when my Unifi NVR box died, I was able to bring up a replacement, allocate storage for it, and get it up and running with all my cameras connected and recording in less than 30 minutes. If I want to add more storage, it just takes a few clicks to allocate more to the share. I've done freenas and other drive pool systems, but how easily everything works here takes a lot of the headache out of setting up this host. All of these things made it really easy for me to pull the trigger on the unlimited license. I'd like to see more notification agents going forward, such as Pushover and Telegram bot integration, as well as official docker support; the community-driven one is good, but it always gives some peace of mind when it comes from the official developers.