badi95

Members

  • Posts: 14
  • Joined
  • Last visited

badi95's Achievements

Noob (1/14)

Reputation: 0

  1. I'm in the process of switching over from the passthroughvpn container to the "VPN tunnel access for docker" feature to connect to my VPS, so I can expose certain containers through nginx. I'm able to connect to the VPS, and containers that don't interface with other containers work fine. I'm running into issues when I try to use containers that need to talk to other containers on the bridge network, for example overseerr. I've tried adding the bridge network to the container along with the wg0 network, but then the container is no longer accessible through the tunnel (see the sketch after this list). Any help would be appreciated.
  2. I have a WireGuard tunnel to my VPS for certain docker containers using the passthroughvpn container. Will I be able to replace this setup with the functionality described, or will it only support commercial VPN solutions?
  3. Ended up picking this up, https://www.amazon.com/gp/product/B0797CDK7J?tag=serverbuilds-20&geniuslink=true, and getting 2x SATA-to-Molex cables instead of one SATA to two Molex for the Intel enclosure. Seems like a good deal for a good PSU. Should I just leave my server running with the disk emulated until I can get everything installed?
  4. I have virtually no idea what I'm doing when it comes to speccing out a power supply. Is this a good option for my situation? https://www.amazon.com/gp/product/B0797CDK7J?tag=serverbuilds-20&geniuslink=true
  5. It's the one that came with my Dell T110ii back in 2012... Suggestions for what I should get?
  6. Thanks for the help! I'll search around for steps on how to do the rebuild. Do you think my 305W power supply is sufficient, or does it need to be upgraded to handle all the disks I'm running on it?
  7. Those 2 disks are on the same half of the Intel AXX6DRV3GEXP, so I think they share the same power and HBA line. The connectors seemed to be good. The wiring to the Intel AXX6DRV3GEXP is pretty janky, though: I have a SATA power splitter feeding a one-SATA-to-two-Molex adapter, plugged into the back of the Intel AXX6DRV3GEXP.
  8. Came back from dropping the kids off at school to find my array in bad shape: one disk disabled with 34 errors, another disk with 2048 errors but still running. I've got a Dell T110ii with an LSI SAS 9207-8i connected to four 2.5" drives (2 SSD, 2 10K HDD) and an Intel AXX6DRV3GEXP with four 3.5" SAS drives, plus four 3.5" SATA drives connected to the MOBO. I pre-cleared all the disks when I got them. Diagnostics attached. The system was in heavy use when the errors occurred (transferring TBs of data from the HDD cache to the array). Could I be overloading the 305W power supply that came with it? nasofbadi-diagnostics-20210730-0913.zip
  9. Btw, there's a typo in the wiki: "If you have an existing private key mnemonic, place in a text file here /mnt/user/appdata/machinaris/mnenomic.txt as a single line with spaces between the words. If you don't have an existing private key, don't worry! You'll be able to generate a new key on first run." Should be: "If you have an existing private key mnemonic, place in a text file here /mnt/user/appdata/machinaris/mnemonic.txt as a single line with spaces between the words. If you don't have an existing private key, don't worry! You'll be able to generate a new key on first run."
  10. Nevermind, just needed to do this: https://github.com/guydavis/machinaris/wiki/Workers#my-network-has-no-local-dns-can-i-use-ips-to-connect-workers
  11. Hi, I'm running a full node on my unraid box, and am trying to run a plotter on another machine. The plotter shows up in the workers, but says "Connection Refused" under last ping to worker. Any advice? (A basic reachability check is sketched after this list.) Here's the docker-compose file for my plotter:

      version: '3.3'
      services:
        machinaris:
          environment:
            - mode=plotter
            - controller_host=192.168.0.12
            - TZ=America/New_York
          ports:
            - '8926:8926'
            - '8927:8927'
          volumes:
            - '~/.machinaris:/root/.chia:rw'
            - '/mnt/chia-pool:/plots:rw'
            - '/mnt/tmp1:/plotting:rw'
          image: ghcr.io/guydavis/machinaris
  12. I guess my question is: if the data isn't changing much, do we need to check as often? If I was last confident in my parity at 2 TB, do I need to re-check my confidence at 2.01 TB, or can I wait until 2.1 TB? Is confidence inversely proportional to the amount of new data written, or is it based on another factor like time?
  13. New feature request: run a parity check after X amount of data has been written to the array since the last parity check. I've been wondering how much sense it makes to do a parity check when we aren't writing much data to the array.
  14. Could someone update the hddtemp Unraid template to use "/dev/sd*[!0-9]" instead of "/dev/sd*"? With /dev/sd* you end up with duplicates for sda and sda1 (a quick illustration is shown below).
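Regarding the multi-network attempt in post 1: a minimal sketch, assuming the tunnel shows up as a docker network named wg0 (as in the post) and using the overseerr container as the example. These commands are an illustration, not the plugin's own workflow:

    # Attach the container to the default bridge in addition to the tunnel
    # network, so it can reach other bridge containers while staying on wg0.
    docker network connect bridge overseerr   # add the bridge network
    docker network connect wg0 overseerr      # no-op if it is already on wg0

    # Confirm the container now has an interface on both networks.
    docker inspect overseerr --format '{{json .NetworkSettings.Networks}}'

One thing worth checking after attaching a second network is which interface the container uses for its default route, since replies to tunnel traffic may leave via the bridge instead of wg0.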
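For the "Connection Refused" ping in post 11, a quick way to separate a networking problem from a Machinaris problem is to test the worker's port directly from the controller (192.168.0.12 in that compose file); <worker-ip> is a placeholder for the plotter machine's LAN address:

    # Run on the controller. Any HTTP response at all (even an error status)
    # means the port is open; "connection refused" points at the container
    # not listening or a firewall in between.
    curl -v http://<worker-ip>:8927/

If name resolution is the issue, the wiki page linked in post 10 (using IPs instead of hostnames to connect workers) is the relevant fix.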
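On the glob change requested in post 14, a small illustration of the difference; the device names are just examples:

    # /dev/sd* matches whole disks and their partitions, so the same drive
    # is listed twice (sda and sda1):
    ls /dev/sd*
    #   /dev/sda  /dev/sda1  /dev/sdb  /dev/sdb1

    # The trailing [!0-9] excludes nodes whose names end in a digit,
    # leaving only the whole-disk devices hddtemp should poll:
    ls /dev/sd*[!0-9]
    #   /dev/sda  /dev/sdb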