Gnomuz

Everything posted by Gnomuz

  1. I raised that issue a few posts above, but the problem is that tzdata is not included in the official docker image, which is out of reach for the author of this template ... So the TZ variable is simply ignored. I only run a harvester in the docker, and it communicates properly with my full node/farmer, even though the container is in UTC time and the full node in "Europe/Paris", which is 2 hours ahead, just like for you. There's a manual workaround to get the correct time, but you have to redo it each time you restart the container :
     - enter the container console
     - stop all chia processes : 'chia stop -d farmer'
     - install tzdata : 'apt-get update' then 'apt-get install tzdata', which lets you set the timezone
     - restart chia : 'chia start farmer'
     * change 'farmer' in the commands above according to the set of processes you run in the container. For me it's 'harvester', which runs 'chia_daemon' and 'chia_harvester' only. 'farmer' is fine for a full node, farmer, harvester and wallet. Hope that helps, although I doubt your sync issues are related to the timezone. I'm almost sure everything is managed as UTC time at network level, and the timezone is only used to display timestamps in logs. If you don't open and forward port 8444 to the container (in practice, to the IP address of your Unraid server) on your router, you will have difficulty connecting to peers on the network, and so the sync will take ages. It's exactly the same as in BitTorrent and many other P2P protocols. The oldest may remember the "Low Id" in eMule 😉
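     For illustration, the two-hour offset can be checked with GNU date on any host that has tzdata installed (the epoch value below is an arbitrary instant in May 2021, when Paris was on CEST, UTC+2) :

     ```shell
     # Same instant rendered in two zones; Europe/Paris is UTC+2 in summer (CEST)
     TZ=UTC date -d @1620000000 '+%Y-%m-%d %H:%M %Z'           # 2021-05-03 00:00 UTC
     TZ=Europe/Paris date -d @1620000000 '+%Y-%m-%d %H:%M %Z'  # 2021-05-03 02:00 CEST
     ```

     Without tzdata installed, the "Europe/Paris" lookup fails and the container silently falls back to UTC, which is exactly the behavior described above.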
  2. Hi, Review the link I gave in my OP (https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines) to understand the underlying objectives. Step by step :
     - copy the mainnet/config/ssl/ca directory of your full node to a path accessible from the container, e.g. appdata/chia
     - edit the container settings : farmer_address : IP address of your farmer/full node machine ; farmer_port : 8447 ; harvester_only : true
     - start the container
     - enter the container console
     - enter "top" ; you should have only two chia processes, "chia_daemon" and "chia_harvester"
     - enter the venv : ". ./activate"
     - stop the running processes : "chia stop -d all"
     - check the processes stopped properly with "top"
     - create the keys for the container harvester, signed by the farmer node : "chia init -c /root/.chia/ca" (if you copied the farmer's ca directory as suggested above ; adjust if needed)
     - restart the harvester : "chia start harvester"
     - check both logs (container and farmer nodes) ; you will see successful connections between the two nodes if your logs are set at INFO level
     - if everything is fine, delete the appdata/chia/ca directory as a safety measure, so that you don't have multiple copies of your keys around your network
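     Collected into one transcript, the console part of the steps above might look like this (the /root/.chia/ca path is the example from the post ; adjust it to wherever you copied the farmer's ca directory) :

     ```shell
     # Inside the container console, after setting farmer_address / farmer_port / harvester_only
     . ./activate                   # enter the chia virtual environment
     chia stop -d all               # stop the running daemon and harvester
     chia init -c /root/.chia/ca    # create harvester keys signed by the farmer's CA
     chia start harvester           # restart the harvester only
     # Then check both nodes' logs (INFO level) for successful connections,
     # and delete the copied appdata/chia/ca directory afterwards.
     ```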
  3. Hi, Thanks for the template, I was waiting for it because my full node is on a Raspberry Pi 4 and everything works fine, but harvesting from the Pi on an Unraid Samba share was just awful from a performance standpoint, especially while a new plot was being copied to the array (I saw response times beyond 30 seconds!). Now I run only a harvester in the container, and response times are more in the milliseconds range ! By tweaking config.yaml it's easy to connect the container harvester to the full node, after having generated keys for the harvester with chia init -c /path/to/full/node/certs (see https://github.com/Chia-Network/chia-blockchain/wiki/Farming-on-many-machines for more details). I just have a silly issue : the container seems to default to a generic "Europe" timezone, and timestamps are off by two hours from my local time in France. I added a "TZ" variable set to "Europe/Paris" to the template, but that changed nothing. I installed tzdata inside the container and ran dpkg-reconfigure tzdata to set Europe/Paris ; it's fine now, but obviously not a persistent solution. I would really prefer to have consistent timestamps in logs between the various machines (harvester container, full node, plotting machines). Thanks in advance for any guidance. PS : of course, my Unraid host is in my timezone
  4. Hi, I have exactly the same issue as @kyis and will try to elaborate a bit further. I live in an area in France where ADSL connections are awfully slow (5 Mbps DL / 0.7 Mbps UL). So I have a 4G router (in bridge mode) with an external antenna as the main internet connection, with very decent speeds (150 Mbps DL / 45 Mbps UL), and the ADSL ISP box is only used as a failover. An internal router manages the failover and routing for all internal devices. The issue is that port forwarding is not available on the 4G connection, as is often the case. The workaround I've found is a WireGuard tunnel over the 4G connection to a VPN provider which accepts port forwarding on a limited number of ports. That works great and I can access my LAN from outside without any problem, except for Remote Access by Unraid. I understand you use the IP that contacted your mothership server, in my case the public 4G address, which is not reachable from outside. Of course I could, as you suggest, implement policy-based routing rules to force the outgoing traffic from my Unraid server :
     - through the ADSL connection, but then the speeds would be awful, and the remote access would be practically unusable, although available
     - through the WireGuard tunnel, but then I would route all the traffic of the Unraid server through the VPN provider, and most of it (e.g. docker and plugin updates) doesn't really "deserve" it.
     The best solution, imho, would be a setting in the plugin to indicate a fully qualified name (or a fixed public IP) as the remote access entry point, for specific cases like @kyis's or mine. Otherwise, the workaround would be a detailed view of the exchanges between the Unraid server and your server(s), so that I can try to implement a reasonable routing policy, limiting the use of the WG tunnel to the communication between the plugin and your server(s). Thanks in advance for your attention.
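     For what it's worth, the kind of policy-based routing discussed above can be sketched with iproute2 ; all the addresses and names here (10.2.0.2 as the server's tunnel address, wg0, table 100) are made up for illustration, not taken from my setup :

     ```shell
     # Hypothetical sketch: send only traffic originating from the server's
     # WireGuard address through the tunnel, leaving everything else (docker
     # and plugin updates, etc.) on the default 4G route.
     ip rule add from 10.2.0.2 table 100        # classify by source address
     ip route add default dev wg0 table 100     # that class defaults to the tunnel
     ```

     The hard part, as noted above, is that without knowing which endpoints the plugin talks to, there is no clean way to narrow the "from" selector down to just that traffic.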
  5. @i-B4se, according to http://apcupsd.org/manual/manual.html#connecting-apcupsd-to-a-snmp-ups , you may try to populate the "Device" setting with "IPAddr:161:APC", in case the autodetection of the "vendor" qualifier fails with your UPS. Another option seems to be switching from SNMP to the PCNET protocol, see http://apcupsd.org/manual/manual.html#powerchute-network-shutdown-driver-pcnet , if your UPS/network management card supports it. But it's pure guesswork, I have no practical experience with network card / SNMP APC UPSes. Good luck ; from my painful personal experience, getting apcupsd to work properly with an APC UPS over a USB connection can be a long story ...
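     In apcupsd.conf terms, the SNMP suggestion above would look roughly like this (the IP address is a placeholder, and the exact DEVICE qualifiers are described in the linked manual section ; again, untested guesswork on my side) :

     ```
     # /etc/apcupsd/apcupsd.conf -- SNMP connection sketch (placeholder address)
     UPSCABLE ether
     UPSTYPE snmp
     DEVICE 192.168.1.50:161:APC
     ```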
  6. Thanks @ich777, quick and efficient, as usual ! I have a strong preference for the Production branch rather than the New Features one on my server ...
  7. I agree the USB-RS232 adapters with genuine FTDI chips are fine, and this product seems to be serious. The problem is it costs around 50€ here in France, so the add-on card would be cheaper for @tetrapod if he has a free PCIe slot.
  8. If you have a free PCIe 1x slot on your motherboard, that will do the trick. The other route would be a USB to RS232 adapter, but there are many compatibility issues, as most of these so-called FTDI adapters are cheap Chinese crap. So stick with the add-on card if you can.
  9. Hi @ich777, I've seen the latest New Feature Branch driver 465.24.02 is now available in the settings page. Could you also make the latest Production Branch driver 460.73.01 available please ? The download page for this one also indicates "File Size: Temporarily unavailable", but the download link is fine, I just downloaded and installed it on another machine. Thanks for the great job, as usual 😉
 10. Thanks for praising my perseverance, I hate giving up, maybe a kind of OCD, but it's sometimes useful. The "magic recipe" for getting correct numbers was only a poor workaround for me, as it didn't survive a reboot, as you mentioned. The issue we face is an apcupsd bug, reported on the apcupsd mailing list on various OSes. But if you read my recent posts since late March 2021, you'll see I've now implemented a working solution, which consists in replacing the USB connection with a good old RS232 serial connection between the UPS and the server ("smart" cable ref AP940-0625A). If you have a DB9 serial port available on your server and you're ready to invest 30€ in the cable (Amazon Europe price), I would definitely recommend going this way and forgetting about it forever !
  11. No spin down issue here for 6.9.2, working as expected, six SATA HDDs with LSI HBA
 12. @limetech, @jonp Maybe this thread should be moved by a forum admin to the "General support" section. I initially posted it as a prerelease bug report because I discovered the issue when I installed the UPS while my server was on 6.9.0 beta30, but it's obviously Unraid version agnostic. Moreover, it's very likely an apcupsd package and/or UPS firmware issue with the USB connection, as others report the same under various distros on the apcupsd general mailing list. Still, as apcupsd is the built-in Unraid solution to communicate with UPSes, I think it would make sense to have this tested workaround visible for those who'd encounter the same problems but would very likely not find this thread buried in the prerelease bug reports section... Thanks in advance for considering this move for the community.
  13. You're welcome, I've been fiddling with this for so long that I'm glad to share and help others get rid of this "ritual", as you say, that I very regularly forgot when rebooting ... 😉
  14. Changed Status to Solved Changed Priority to Annoyance
 15. Next and hopefully last episodes :
     1) Reboot
     I've rebooted the server, and no issue after reboot : the daemon restarts properly and immediately gets good values from the UPS.
     2) Full power outage simulation
     - I set "Battery level to initiate shutdown (%)" to 85% to avoid discharging the UPS battery too much
     - I unplugged the UPS to simulate a power outage
     - Unraid server orderly shutdown was initiated @ 85%
     - 3 minutes later (default UPS grace period of 180 seconds), the UPS turned off as expected
     - when plugging the UPS back to mains, it started properly
     - the Unraid server booted properly, without parity check
     - UPS values read by apcupsd are correct
     - "Battery level to initiate shutdown (%)" set back to 60% in my case for normal operation
     - the UPS battery is slowly recharging (94% after 2 hours)
     So, everything is working as expected now. Modbus over a serial connection with the so-called "smart" serial cable (ref AP940-0625A) was definitely the way to go rather than the supplied USB-A to USB-A cable, in my particular case. Don't forget to activate the Modbus protocol on the UPS from the UPS LCD panel in configuration mode (see manual). According to the manual, Modbus should be enabled by default ; this was not the case for my UPS. You must have CP.1 displayed, not CP.0, under configuration mode. The only limit of this solution is that you must have a DB9 serial port on your motherboard. This is generally the case on server MBs, but a serial port is rarely present on modern workstation or desktop MBs. I've read on the apcupsd support list that this solution should work with a USB to serial adapter, but you may have to try various adapters, avoiding cheap Chinese crap. And the name of the tty device in the daemon setup should be something like ttyUSB0 or ttyACM0, depending on the chip in the adapter. I haven't dug further as I have an unused native serial port on my MB. If others want to experiment, I'll pass the torch to them now !
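     For reference, the shutdown threshold tweaked in the test above maps to the BATTERYLEVEL directive in apcupsd.conf, which is one of three triggers (the MINUTES and TIMEOUT values below are illustrative defaults, not taken from my setup) :

     ```
     # apcupsd.conf shutdown triggers -- the first condition reached wins
     BATTERYLEVEL 60   # % charge; I raised this to 85 for the outage test
     MINUTES 3         # runtime remaining in minutes (example value)
     TIMEOUT 0         # seconds on battery; 0 disables this trigger
     ```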
 16. I confirm I didn't configure the serial port for baud rate, stop bit, parity, ... For reference, the current (default) serial port setup :
     stty -F /dev/ttyS0 -a
     speed 9600 baud; rows 0; columns 0; line = 16;
     intr = <undef>; quit = <undef>; erase = <undef>; kill = <undef>; eof = <undef>; eol = <undef>; eol2 = ); swtch = M-0; start = M-s; stop = f; susp = <undef>; rprnt = <undef>; werase = <undef>; lnext = M-h; discard = <undef>; min = 0; time = 0;
     -parenb -parodd -cmspar cs8 -hupcl -cstopb cread clocal -crtscts
     -ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany -imaxbel -iutf8
     -opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
     -isig -icanon -iexten -echo -echoe -echok -echonl -noflsh -xcase -tostop -echoprt -echoctl -echoke -flusho -extproc
     So far, no data corruption here, as I monitor UPS data with telegraf/grafana, and any corrupted data would be clearly visible on the graphs.
 17. Hi all, I just received the serial cable and have some first positive feedback. I stopped the apcupsd daemon, unplugged the USB cable and plugged in the serial cable. Then I modified the UPS settings as per the apcupsd manual : After restarting the apcupsd daemon, and a few seconds of refreshing, the daemon gets consistent data from the UPS : /dev/ttyS0 was indeed the unique DB9 serial port on my motherboard, and all values are correct. I have simulated a short power outage of 2 mins (without shutdown, batteries @ 90% and 18 mins runtime left), and the daemon got the expected data and had the expected behavior : I still have to reboot the server to make sure everything starts over properly after a reboot (which was the problem with the USB cable), but I can't reboot right now, it's already a bit late here. If everything is OK, I'll simulate a power outage long enough to trigger the shutdown, and check what happens when mains returns. I hope I can test all that tomorrow morning and will report back.
  18. Same typo here, and these footer temperatures come from the Dynamix System Temperature plugin. Support thread for this plugin is http://lime-technology.com/forum/index.php?topic=36543.0
 19. Thanks for replying, it's reassuring to see I'm not alone -at all- with this long-lasting issue, as we have exactly the same UPS, except for the form factor (rack mount for you, tower for me). I should receive the serial cable on Tuesday, and I'll share my findings, of course. I read the apcupsd manual again, and the setup should be rather straightforward according to http://apcupsd.org/manual/manual.html#modbus-driver . As I'm not a Linux expert, and haven't played with serial ports for years if not decades, I just wonder how to find the device name. I have an existing /dev/ttyS0 entry ; I suppose it is the DB9 serial port of my motherboard, but I'm not 100% sure. I'll keep you informed anyway.
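     One way to check whether /dev/ttyS0 is a real on-board port (rather than a dead placeholder entry) is to look at what the kernel registered ; this is a generic Linux check, not something specific to apcupsd :

     ```shell
     # Serial ports detected at boot; a real 16550A-style port shows a
     # non-zero I/O address and IRQ, while unused ttyS entries report
     # "uart:unknown" in the driver status file.
     dmesg | grep -i 'ttyS'
     cat /proc/tty/driver/serial    # requires root; one line per ttyS port
     ```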
 20. Hello all, I'm back on this persistent and exasperating issue, which obliges me to go back and forth between my basement and my office whenever I reboot my Unraid server ... I've just upgraded the server to 6.9.1 and of course nothing has changed. I do agree with @Dir that Unraid as the base distro is very likely not at stake, and that apcupsd somehow messes up with this particular UPS during the initialization/reset phase. Nothing surprising either ; the latest release of apcupsd, in 2016 (!!!), is 3 years older than this UPS launch ... So I went through the apcupsd mailing list as suggested by @jademonkee, and obviously @Dir and I are not the only ones to meet this issue. From what I've seen, the generally agreed "solution" is to switch to the good old serial connection instead of USB, with a specific RJ45 to DB9 female cable (ref APC AP940-0625A, 2 meters long). I'm lucky enough to have a serial port on my motherboard, so I've just ordered the cable from Amazon for roughly 30€ and should receive it next week. I understand the apcupsd settings should be as follows with this setup :
     UPS Cable : Custom
     Custom UPS Cable : 940-0625A
     UPS type : APCsmart or ModBus (depending on whether ModBus is activated on the UPS ?)
     Device : /dev/tty**
     If anyone has already succeeded in running a similar setup, you're welcome to confirm or amend ! I'll report back once I've received the cable and tried this solution, which will remain a workaround for me, even if it happens to work in the end...
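     Translated into apcupsd.conf directives (which is what the Unraid UI settings above end up writing), my understanding of the planned setup would be roughly this ; the UPSCABLE value is my assumption from the manual, to be confirmed once the cable arrives :

     ```
     # apcupsd.conf sketch for the AP940-0625A serial cable
     UPSCABLE smart        # or the specific custom cable type, per the manual
     UPSTYPE modbus        # 'apcsmart' if Modbus is not enabled on the UPS
     DEVICE /dev/ttyS0     # first on-board DB9 serial port
     ```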
 21. Thanks for the reply, as usual 😉 Strange that I got such a slow DL speed from France (ISP Free). As I've said, I didn't have this issue when downloading v460.56 a few days ago. I understand your advice to avoid this possible problem during a reboot would be, whenever a new Nvidia Linux driver is published, to :
     - download the latest driver package from the plugin page
     - set the driver version to this downloaded version rather than "latest" in the plugin page
     - reboot (without downloading, as the latest driver is already there)
     If that's correct, I'm fine with such a method ; it gives more control over the driver version. Anyway, Nvidia doesn't publish drivers every other day, and I'm not always willing to install them on day one ! Thanks again.
 22. Hi all, As a conclusion to this long-lasting thread, I've just updated to 6.9.1 and successfully managed to activate persistent syslog to an ad-hoc share by following @nblom's step-by-step. Note that until I deleted the rsyslog.conf and .cfg files and rebooted, I still got errors and no persistent syslog.
 23. Hi all and @ich777, I upgraded a few days ago to 6.9.1 and also to the latest version of this great plugin, without any issue. On the first reboot it upgraded the Nvidia drivers to 460.56, and the download at boot was reasonably fast, as far as I recall. I'm currently rebooting the NAS for another reason (rsyslogd fix with this version), and as I've set the plugin to the "latest" driver, it's currently downloading the 460.67 drivers (118 MB), which is the expected behavior. My issue is that the download on boot runs at a ridiculously slow speed, around 100 KB/s on average, that is around 17 minutes for the full download ! As a consequence, the boot and outage time is significantly longer than expected. Just to understand, where is the package downloaded from ? I think @limetech, who now support this plugin as part of the official Unraid setup, should provide a decent download repo with download speeds similar to what we get when updating the OS (AWS iirc). Thanks in advance for your thoughts and support.
  24. I've updated the container 30 minutes ago, the app is now 8.6.0.1059, thanks for the support @Djoss! It has started a full block synchronization, 3% so far for a 1.4TB dataset
 25. I activated 2FA a few weeks ago, without any impact on the backup process in the container, and I've been on Unraid 6.9.0 beta35 for around 4 months, so no OS version change here. I just had a closer look at the backup history : it fails to update to the latest version, which is normal, and retries every hour, but backups are indeed working. The files modified today can be restored from the cloud, either from the container or from the webadmin interface. So I think the alert mail @TexasDave and I received is more about the update failures than backup failures. As a reminder, the number of days after which you receive a warning email is a parameter which can be set in the webadmin interface. It's 2 days for me, but I think I shortened it from 7 days to 2 days when I installed the container. That may explain why others haven't received these emails -yet-.