jaj08

Members • Posts: 93
Everything posted by jaj08

  1. Grrrr, not shocked that there was a clear cause to my issues. I have been trying to do this as cheaply as possible, it being Christmas time and this upgrade not really being planned; I just got lucky with a new work laptop replacing my personal desktop, giving me the ability to upgrade the 2014 server-grade build I had been using.
  2. So I think I made the mistake of purchasing this card: https://www.amazon.com/dp/B09F2CQL58?ref=ppx_yo2ov_dt_b_product_details&th=1

     All my drives come up OK, but when doing a parity check I was averaging speeds in the 20s (MB/s). I then installed DiskSpeed, and while the individual benchmark results for each drive look good, the test that reads all drives at once clearly cuts performance down dramatically. I knew it was a gamble with so many ports on a single cheap card, but I am trying to keep to a tight budget as I am upgrading my server with what was originally my desktop hardware, so the mobo doesn't have the typical server/high-density port counts I had previously.

     I have a couple of old LSI cards laying around that all work, but then I lose out on my video card/transcoding because I'd be reliant on 2x 8-port cards. So my question is: what's the most budget-friendly option to get at least 16 ports on a single card? I know I will end up disappointed, but I am still very tempted to simply try out the x16 card from this same seller vs the x4 card I attempted to get by with. I only bought the x4 card because I was trying to give my video card the primary x16 slot, but I suspect the video card could get by on the other slot, or worst case I'll just use 2x 8-port cards and go to CPU transcoding on the Ryzen 3600X processor that came along with this hardware upgrade.

     Any budget-friendly ideas for a 16-port card? Or can anyone give me a confidence boost that I may get lucky with this x16 card? https://www.amazon.com/dp/B09K5GLJ8D?ref=ppx_yo2ov_dt_b_product_details&th=1
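
     For what it's worth, here is the rough PCIe math plus the quick parallel-read test I'd run before buying anything else; a sketch only, and the device names are illustrative, not actual assignments:

        # Back-of-the-envelope: if the x4 card is PCIe 3.0, four lanes is ~4 GB/s raw,
        # so 16 drives reading at once see ~250 MB/s each at best. Cheap high-port-count
        # cards also tend to hang SATA port multipliers off a smaller controller, which
        # drags concurrent throughput well below that ceiling.
        # Aggregate read test: read 1 GiB off every data disk at once and compare speeds.
        for d in /dev/sd{b..q}; do
          dd if="$d" of=/dev/null bs=1M count=1024 iflag=direct 2>&1 | tail -n 1 &
        done
        wait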
  3. Skipped over this portion of the reply this morning. I would say I'm doing Chia the way the developers said to: I am utilizing spare resources I have laying around the office. With that, installing a small/unobtrusive harvester exe seems harmless and low-key, but installing the Docker service on each server starts to get a little more intrusive on office equipment. Several of my harvesters are also 2012 R2 servers, which I believe do not support Docker by default. I do have a couple of backup-only 2016 servers; maybe I will see if I can expand an instance or two to those.
  4. Actually both are full nodes, but I assume the bug is the same as far as how the workers are communicating.
  5. Loving the remote workers to consolidate everything into a single interface. Wish Machinaris could be a standalone install for all the Windows remote harvesters I have running. With that, I want to report that my estimated time to win has the wrong info: comparing against the Chia calculator, it seems like the expected time to win is calculated from a single remote worker instance instead of combining the remote estimate with the primary instance. This setup has the primary + 1 remote full node:

     • 639 total plots: Chia calculator says ~4 months
     • 166 plots on the remote worker: Chia calculator says ~1 year
     • 473 plots on the primary instance: Chia calculator says ~5 months

     Machinaris expected time to win: 1 year
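
     For reference, the standard time-to-win math that presumably should be applied across all workers combined; a sketch, with the ~35 EiB netspace being an assumed figure (plug in the current one):

        # Your share of netspace x the network's ~4608 blocks/day gives expected days to win.
        awk -v plots=639 -v plot_gib=101.4 -v net_eib=35 'BEGIN {
          your_eib = plots * plot_gib / 2^30    # k32 plots are ~101.4 GiB; 2^30 GiB per EiB
          days = net_eib / (your_eib * 4608)    # expected days until one of your plots wins
          printf "Expected time to win: ~%.0f days\n", days
        }'

     With all 639 plots counted, that lands around 4 months, matching the Chia calculator, so the 1-year figure really does look like the single-worker estimate.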
  6. Thought I'd report a little anomaly I just experienced: when I updated to the latest docker, on startup my databases showed as no longer in sync, back at the March date when mainnet launched. Looking at the file sizes of the DB files, they were still 4 GB, so the data wasn't lost, but for some reason the docker went back in time and was trying to sync from scratch. I restarted the docker, same results. Luckily I manage multiple docker instances, so I copied the db folder over from one of my other instances and got things happy again. My other 4 instances all upgraded without issues.
  7. So I see Chiadog's official recommendation for monitoring multiple harvesters is to spin up multiple Chiadog instances, similar to what you suggested with multiple dockers: https://github.com/martomi/chiadog/wiki/Monitoring-Multiple-Harvesters

     Multiple Machinaris dockers, though, spin up a lot more services by default than necessary, unless of course someone simply comes up with a Chiadog-dedicated docker for unraid. So I am wondering: any chance you can add support and a variable for multiple Chiadog instances? Perhaps a variable where we can specify how many instances we require. Then from the primary interface you would have Alert1, Alert2, Alert3 sections in the GUI, with a different configuration file for each instance number (rough sketch of the idea below). Ideally each Alert# could be given a custom name so we can label each server being monitored.

     My home setup works perfectly with Machinaris, but my office setup has a total of 8 or so remote harvesters split between two locations.
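
     For clarity, here is roughly the shape of what I mean, borrowing Chiadog's documented one-instance-per-harvester invocation; the host names and config paths are made up for illustration:

        # One chiadog instance per monitored harvester, each with its own config
        # (each config's log path points at that machine's copied-in debug.log).
        cd /root/chiadog
        for host in office1 office2 office3; do
          python3 main.py --config "config-${host}.yaml" &
        done
        wait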
  8. Is there any way for the main summary page to show the hash challenges for harvesters connecting to the node, like the normal GUI does? As I glance through the log I am guessing the answer is no, but it would be awesome if it showed the plots-passed status of the connecting harvesters. I think I will dabble with having multiple dockers running so that Chiadog can point at each of my harvesters for reporting. That plus Pushover notifications or something should help things out. Love the new release!

     Update: I have 2 dockers running, with the log file overwriting from a remote harvester, but at a glance it looks like this band-aid doesn't work for Chiadog. I don't think Chiadog is expecting to see the log of a harvester, as it doesn't pull out the plot-count/processing-time info like it does on a full node log. Seems like the best solution for monitoring remote harvester logs will be to get the Chiadog developer to add this officially as a feature.
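
     In the meantime I can at least spot-check a harvester log by hand; a crude sketch, and the path is a guess at the Machinaris appdata layout rather than the real one:

        # Pull the last few eligibility lines the harvester wrote to its debug.log.
        grep "plots were eligible" /mnt/user/appdata/machinaris/mainnet/log/debug.log | tail -n 5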
  9. As I eagerly await the next release to monitor logs, I was wondering whether, from the Docker environment, I will be able to monitor logs located on a Windows system? Really hoping to have a single interface/location to keep an eye on everything. Hitting the 1400-plot mark with 0 XCH is frustrating, so I keep wasting time auditing log files and such to ensure everything seems happy.
  10. Just to update: fixing the time zone ended up not fixing my syncing issues. In a way that makes sense, as the time itself was correct, just in the wrong time zone, and why would Chia be broken in specific time zones? Anyway, for now I was forced to migrate my full node back over to my Windows desktop, where I have been synced all day without issues. For now the docker is strictly a harvester for me.
  11. Woohoo, apt-get install systemd and following the setup worked for me. I restarted the docker for good measure, made sure the time zone was right, and almost immediately it went back to showing a synced status. I had things drop out of sync the other day, but simply restarting the docker seemed to fix that problem. Hopefully now I can stop fiddling with the docker so I can earn some Chia... I currently have 124 plots at home and 632 at the office, but I'm still a member of the 0-XCH club.
  12. Since migrating to the docker being my full node, I have had random times where it drops back into syncing mode. Now that I read your comment, I have noticed the time zone is clearly wrong in my docker as well. So what's the fix?
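
     For anyone else landing here, the generic Docker-level fix is passing the host's zone into the container; this is a general Docker convention, not something confirmed for this template, and the container name/zone below are examples:

        # Hand the container an explicit time zone at creation...
        docker run -d --name machinaris -e TZ="America/Chicago" ...
        # ...and verify what a running container thinks the time is.
        docker exec machinaris date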
  13. So in the above case I am then having the service run directly on unRAID? Ideally I would love to see this moved over to a docker setup now that the tunnel service is free.
  14. Any reason why my docker keeps telling me an update is available, appears to apply the update, and then goes back to "update available"? Overview of what I see when applying the update; it looks like everything goes OK:

     TOTAL DATA PULLED: 251 MB
     Successfully stopped container 'binhex-plexpass'
     Successfully removed container 'binhex-plexpass'
     The command finished successfully!
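
     A hedged way to check whether the update actually took; the repo name is my assumption about which image this template pulls:

        # Image ID the container is currently running vs. what has been pulled locally.
        docker inspect --format '{{.Image}}' binhex-plexpass
        docker images --digests binhex/arch-plexpass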
  15. Am I overlooking a setting somewhere, or is there no setting to adjust how frequently the pings are done? I would love to increase the frequency to catch shorter bursts of packet loss and latency.
  16. Did the ICMP issue ever get resolved? I am seeing the "Not Permitted" message, and my install defaulted to the --user 99:100 advanced config.
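
     In case it helps the next person, the usual culprit when a non-root container can't ping; an assumption on my part, not a confirmed fix for this image:

        # Unprivileged users may open ICMP echo sockets only if their GID is in this range;
        # widening it on the host lets a --user 99:100 container ping without cap_net_raw.
        sysctl -w net.ipv4.ping_group_range="0 2147483647"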
  17. Jitsi?

     +1 Something I never really considered until we were all stuck in quarantine, but I can see where this would be useful vs Zoom's 40-minute limitation on free accounts.
  18. I just replaced my UPS and I am running into the problem of not getting all the stats. I followed the instructions to enable Modbus, but within NUT there is no option to use that as a device type. Has anyone gotten this working with NUT? I have 2 unraid servers, so I need NUT so the 2 servers can talk to the one UPS (rough sketch of the layout I'm after below).
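
     Roughly the two-server shape I'm after, in standard NUT terms; the names, address, and password are placeholders, and usbhid-ups is my assumption for a USB-connected UPS:

        # Server with the USB cable -- /etc/nut/ups.conf:
        #   [ups]
        #     driver = usbhid-ups   # generic USB driver; "modbus" isn't a NUT device type
        #     port = auto
        # Its upsd.conf LISTENs on the LAN; the second server's upsmon.conf then adds:
        #   MONITOR ups@192.168.1.10 1 upsmon <password> slave
        upsc ups@192.168.1.10   # quick check from the second server once upsd is reachable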
  19. Same here, just updated today and now my Plexpass version is broken with the same message. My non-Plexpass deployment shows fully up to date and is not failing.
  20. Anyone ever have this seem to eat its configuration? At first I theorized that the problem happened during my nightly appdata backup, but I had it happen again with no backup scheduled. What happens is I randomly go to log in to the web interface and it starts acting like it's a new configuration, prompting me to name my server and such. If I recover my appdata folder from a few days previous, I can get things back up and running and the syncs start again. This has happened multiple times, on both of the unraid servers I am syncing between.
  21. Yeah I update the containers whenever prompted. I manage a total of 4 unRAID + Crashplan environments. Looks like 3 of the 4 are all stuck on 4.8.0
  22. I just noticed I am running 4.8.0 as well... I saw where my environment auto-upgraded in July to 4.8.3, but for some reason it now shows that it's running 4.8.0. No clue when or what would have caused the downgrade. No new errors or auto-upgrade attempts have been made since.
  23. I'm sure it won't change a thing, but you never know if enough people provide feedback, so I emailed Crashplan some feedback. Specifically, I asked them to consider a Home Lite solution that allows for peer-to-peer backups. If I could keep the peer-to-peer feature I would be much happier, as I took advantage of it a lot and loved the encryption used, so I could trust storing the data on servers I didn't directly manage. I was actually in the process of building a large unraid server that I was going to store at my parents' house, and Crashplan was going to be a big part of my plan to duplicate/back up data from my unraid server. Worst part is my service had just renewed; I really want to wash my hands of this right away now that I know it cannot be my long-term solution.
  24. "Look at 'Additional settings' - 'Password policy' and uncheck the enforcement options."

     Wow, a quick and easy fix. I swear I had tried multiple passwords, but I guess Nextcloud not giving a message as to why it was failing led me down the path of thinking I was experiencing the problem others had reported. Thanks!
  25. Unable to create new users; found a bug post that seems similar (https://github.com/nextcloud/server/issues/2734), but no clues on how to fix it. Anyone else run into this? I saw several people report that time discrepancies can be the cause. I glanced around, and while the Nextcloud docker is running GMT and everything else CST, the actual time is correct for the time zone. Could the GMT/CST mismatch be at play, or is this just some other bug going on?
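
     One hedged diagnostic while digging: create a user from the CLI to rule out the web GUI entirely; the container name and credentials here are examples, and the run-as user varies by image:

        # occ can create users directly, reading the password from OC_PASS.
        docker exec -e OC_PASS="ChangeMe123!" -u www-data nextcloud \
          php occ user:add --password-from-env --display-name "Test User" testuser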