Gnomuz


Everything posted by Gnomuz

  1. I just updated to 2021.06.07.1554. Everything seems fine, but I get the following error message in the dropdown:
  2. I don't see any way to dynamically change the final destination directory for a running plotting process. However, if the final copy fails, for instance because the destination drive is full, the process does not delete the finished plot file in your temp directory, so you can simply copy it manually to a destination with free space and delete it from the temp directory once done.
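     For example, something along these lines from a terminal (the paths are only placeholders, adapt them to your own temp directory and shares):
         # copy the stranded plot(s) to a destination with free space, then delete the source
         # --remove-source-files only removes each file after it has been copied successfully
         rsync -av --remove-source-files /mnt/plot_temp/plot-k32-*.plot /mnt/user/plots_spare/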
  3. I can't see what's wrong then, but it's strange that your prompt is "#", like in the unRAID terminal, whereas mine is "root@57ad7c0830ad:/chia-blockchain#" in the container console, as you can see. Sorry, I can't help further; it must be something obvious, but it's a bit late here!
  4. You have to be in the chia-blockchain directory, which happens to be a root-level directory in this container, for all that to work:
         cd /chia-blockchain
         source ./activate
     Here's the result in my container:
         root@57ad7c0830ad:/chia-blockchain# cd /chia-blockchain
         root@57ad7c0830ad:/chia-blockchain# source ./activate
         (venv) root@57ad7c0830ad:/chia-blockchain# chia version
         1.1.7.dev0
         (venv) root@57ad7c0830ad:/chia-blockchain#
     Hey, btw, these commands must be entered in the container console, NOT in the unRAID terminal!
  5. You tried ALMOST everything! The right command is ". ./activate" (dot, space, dot, slash, activate); you can also type "source ./activate", it's the same thing.
  6. In fact it's a general issue of crappy versioning and/or repo management. On a bare-metal Ubuntu, upgrading from 1.1.5 to 1.1.6 following the official wiki how-to also built 1.1.7.dev0. Deleting the .json file has no effect on the main build, maybe only on the Linux GUI, which I don't use. I think the docker image build process simply suffers from the same problem. The only solution so far to build a "real" (?) 1.1.6 on Ubuntu is to delete the ~/chia-blockchain directory and make a fresh install. Not a problem, as all static data are stored in ~/.chia/mainnet, but s
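     Roughly, following the official install steps (the tag and options below are the ones from the wiki how-to; double-check before running anything):
         rm -rf ~/chia-blockchain                   # wipe the old source tree; keys and plots are untouched
         git clone https://github.com/Chia-Network/chia-blockchain.git -b 1.1.6 --recurse-submodules
         cd chia-blockchain
         sh install.sh
         . ./activate
         chia init                                  # picks up the existing ~/.chia/mainnet
         chia version                               # should now report 1.1.6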
  7. Beware, the build process from the chia dev team definitely needs improvement: I pulled both the (new) '1.1.6' tag and 'latest' (which is the default if you don't explicitly specify one). In both cases, the output of 'chia version' in the container console is 1.1.7.dev0! Check your own containers; it doesn't seem to cause issues for the moment, but when the pool protocol and the corresponding chia version are launched, it may lead to disasters, especially in setups with several machines (full node, wallet, harvester(s), ...). I had already created an issue in github,
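     A quick way to check from the unRAID terminal, assuming your container is simply named 'chia' (adjust the name if yours differs):
         docker exec -it chia bash -c 'cd /chia-blockchain && . ./activate && chia version'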
  8. Hi, I used Host network type, as I didn't see any advantage in having a specific private IP address for the container. So far, the X470D4U2-2T matches my needs. Let's say the BIOS support from Asrock Rack could be improved ... Support for Ryzen 5000 has been in beta for 3 months, and no sign of life for a stable release. As for expansion, well, mATX is not the ideal form factor. If ever I were to add a second HBA, I would have to sacrifice the GPU, so no more hardware decoding for Plex. But it's more a form factor issue than something specific to the MB. In terms of reliability,
  9. No problem, PM me if you run into issues setting the architecture up. For the Pi installation, I chose Ubuntu Server 20.04.2 LTS. You may choose Raspbian instead, that's up to you. If you go for Ubuntu, this tutorial is just fine: https://jamesachambers.com/raspberry-pi-4-ubuntu-20-04-usb-mass-storage-boot-guide/ . Note that apart from the enclosure, it's highly recommended to buy the matching Argon power supply with 3.5A output. The default RPi4 PSU is 3.1A, which is fine with the OS on an SD card, but adding a SATA M.2 SSD draws more current, and 3.5A avoids any risk of instability
  10. Btw, the pull request is mine 😂. You could approve it, maybe it will draw attention from the repo maintainer ...
  11. If you ever want to create something other than k-32 plots, the CLI tool also accepts the '-k 33' parameter, for instance. If '-k' is not specified, it defaults to '-k 32', which is the current minimum plot size. All options are documented here https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference , and 'chia plots create -h' is a good reference as well 😉
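      For instance, a single k-33 plot (the temp and destination paths are just examples):
          chia plots create -k 33 -n 1 -t /mnt/plot_temp -d /mnt/user/plots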
  12. Hi, Thanks for your interest in my post, I'll try to answer your numerous questions, but I think you already got the general idea of the architecture pretty well 😉 - The RPi doesn't have an M.2 slot out of the box. By default, the OS runs from an SD card. The drawback is that SD cards don't last long when constantly written to, e.g. by the OS and the chia processes writing logs or populating the chain database; they were not designed for that kind of workload. And you certainly don't want to lose your chia full node because of a failing $10 SD card, as the patiently created plots are st
  13. Without port 8444 forwarded to the full node, you'll have sync issues, as you will only have access to the peers which have their own port 8444 open, and they are obviously a minority of the network... Generally speaking, opening a port to a container is not such a big security risk, as any malicious code will only have access to the resources available to the container itself. But in the case of chia, the container must have access to your stored plots in order to harvest them. An attacker may thus be able to delete or alter your patiently created plots from within the container, which would b
  14. My understanding of the official wiki is that your system clock must not be off by more than 5 minutes, whatever your timezone is, bearing in mind that the clocks (both software and hardware) are managed as UTC by default in Linux. Anyway, I see that many have problems syncing when running the container as a full node. Personally I went the Raspberry way and everything is working like a charm so far. The RPi4 4GB boots from an old 120GB SATA M.2 SSD under Ubuntu 20.04.2 (in an Argon One M.2 case, which is great btw) and runs the full node. At first, I ran the harvester on the RPi and
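      A quick way to check the clock on a systemd-based distro such as Ubuntu (generic commands, nothing chia-specific):
          timedatectl                      # look at "System clock synchronized: yes" and the reported UTC time
          sudo timedatectl set-ntp true    # (re)enable NTP synchronization if the clock has drifted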
  15. I raised that issue a few posts above, but the problem is that tzdata is not included in the official docker image, which is out of reach for the author of this template ... So, the TZ variable is simply ignored. I only run a harvester in the docker, and it communicates properly with my full node/farmer, even though the container is in UTC time and the full node in "Europe/Paris", 2 hours behind just like you. There's a manual workaround to get the correct time, but you have to redo it each time you restart the container: - enter the container console - stop all chia processes: 'chia stop -d far
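      The post above is cut off; a sketch of that kind of manual fix, assuming the image is Debian/Ubuntu based and that you only run a harvester (run it in the container console):
          cd /chia-blockchain && . ./activate
          chia stop -d all                               # stop the chia daemon and services
          apt-get update && apt-get install -y tzdata    # tzdata is missing from the image
          ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime
          echo "Europe/Paris" > /etc/timezone
          chia start harvester                           # restart whatever services you actually run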
  16. Hi, Review the link I gave in my OP (https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines) to understand the underlying objectives. Step by step:
      - copy the mainnet/config/ssl/ca directory of your full node to a path accessible from the container, e.g. appdata/chia
      - edit the container settings:
          farmer_address: IP address of your farmer/full node machine
          farmer_port: 8447
          harvester_only: true
      - start the container
      - enter the container console
      - enter "top", you should have only two chia processes: "c
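      For reference, those template settings boil down to pointing the harvester at the farmer in the container's config.yaml, roughly like this (the IP address is an example):
          harvester:
            farmer_peer:
              host: 192.168.1.10   # IP of your farmer / full node
              port: 8447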
  17. Hi, Thanks for the template, I was waiting for it because my full node is on a Raspberry Pi4 and everything works fine, but harvesting from the Pi on an Unraid Samba share was just awful from a performance standpoint, especially while a new plot was being copied to the array (I saw response times beyond 30 seconds!). Now I run only a harvester in the container, and response times are more in the millisecond range! By tweaking config.yaml it's easy to connect the container harvester to the full node, after having generated keys for the harvester with chia init -c /path/to/ful
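      The full command looks something like this, run from the container console with the venv activated (the path is just an example, point it at wherever you copied the full node's mainnet/config/ssl/ca directory):
          chia init -c /root/.chia/farmer_ca    # creates harvester certificates signed by the full node's CA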
  18. Hi, I have exactly the same issue as @kyis and will try to elaborate a bit further. I live in an area in France where ADSL connections are awfully slow (5Mbps DL / 0.7Mbps UL). So I have a 4G router (in bridge mode) with an external antenna as the main internet connection, with very decent speeds (150Mbps DL / 45Mbps UL); the ADSL ISP box is only used as a failover. An internal router manages the failover and routing for all internal devices. The issue is that port forwarding is not available on the 4G connection, as is often the case. The workaround I've found is to have a Wireguard tunnel over the 4G
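      The post above is truncated; the general idea behind such a tunnel is to rent a small VPS with a public IP and forward port 8444 through WireGuard down to the full node at home. A rough sketch of the VPS side only (addresses and keys are placeholders, and net.ipv4.ip_forward=1 must also be enabled):
          # /etc/wireguard/wg0.conf on the VPS
          [Interface]
          Address = 10.8.0.1/24
          ListenPort = 51820
          PrivateKey = <VPS private key>
          PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8444 -j DNAT --to-destination 10.8.0.2:8444
          PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
          PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 8444 -j DNAT --to-destination 10.8.0.2:8444
          PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

          [Peer]
          # home side of the tunnel (the router or full node)
          PublicKey = <home peer public key>
          AllowedIPs = 10.8.0.2/32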
  19. @i-B4se, according to http://apcupsd.org/manual/manual.html#connecting-apcupsd-to-a-snmp-ups , you may try to populate the "Device" setting with "IPAddr:161:APC", in case the autodetection of the "vendor" qualifier fails with your UPS. Another option seems to be switching from SNMP to the PCNET protocol, see http://apcupsd.org/manual/manual.html#powerchute-network-shutdown-driver-pcnet , if your UPS / network management card supports it. But it's a pure guess, I have no practical experience with Network Card / SNMP APC UPSes. Good luck, from my painful personal experience, getting apcupsd to w
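      In apcupsd.conf terms, that suggestion would look roughly like this (per the manual the SNMP DEVICE format is hostname:port:vendor:community; the values below are only examples):
          UPSCABLE ether
          UPSTYPE snmp
          DEVICE 192.168.1.50:161:APC:public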
  20. Thanks @ich777, quick and efficient, as usual! I have a strong preference for the Production branch over the New Features one on my server ...
  21. I agree the USB-RS232 adapters with genuine FTDI chips are fine, and this product seems to be a serious one. The problem is it costs around 50€ here in France, so the add-on card would be cheaper for @tetrapod if he has a free PCIe slot.
  22. If you have a free PCIe x1 slot on your motherboard, that will do the trick. The other route would be a USB-to-RS232 adapter, but there are many compatibility issues, as most of these so-called FTDI adapters are cheap Chinese crap. So stick with the add-on card if you can.
  23. Hi @ich777, I've seen the latest New Feature Branch driver 465.24.02 is now available in the settings page. Could you also make the latest Production Branch driver 460.73.01 available please ? The download page for this one also indicates "File Size: Temporarily unavailable", but the download link is fine, I just downloaded and installed it on another machine. Thanks for the great job, as usual 😉
  24. Thanks for praising my perseverance, I hate giving up, maybe a kind of OCD, but it's sometimes useful. The "magic recipe" for getting correct numbers was only a poor workaround for me, as it didn't survive a reboot, as you mentioned. The issue we face is an apcupsd bug, reported on the apcupsd mailing list on various OSes. But if you read my recent posts since late March 2021, you'll see I've now implemented a working solution, which consists of replacing the USB connection with a good old RS232 serial connection between the UPS and the server ("Smart" cable ref AP940-0625A
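      For anyone going the same route, the relevant apcupsd.conf settings for a serial "smart" connection look roughly like this (the device path is an example, adjust it to your actual serial port):
          UPSCABLE smart
          UPSTYPE apcsmart
          DEVICE /dev/ttyS0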
  25. No spin down issue here for 6.9.2, working as expected, six SATA HDDs with LSI HBA