joeloooiz

Members
  • Posts: 20

Everything posted by joeloooiz

  1. After setting up WireGuard using the instructions above, I'm getting an error in the logs that my public key is blank. Looking at wg0.conf, it is indeed blank on the PublicKey line. The container is set to privileged and I've added the new key for WireGuard, but I'm at a complete loss. Of note: I'm running this in Portainer, but it's essentially all the same otherwise. Edit: I notice that my Address line under [Interface] is blank as well. If I use the container console I can generate new private and public keys, but they never get stored in wg0.conf, and I don't think that helps for the purposes of this exercise anyway. Another point of info: when starting the container it says it is attempting to connect to the PIA WireGuard API, hangs, and then moves on, so I'm not sure whether it actually connects. There are no errors to be seen, but perhaps that's exactly what is not happening? I'm just lost, so any help would be greatly appreciated. (A small wg0.conf field-check sketch follows after this list.)
  2. Just curious - 1) where do you get lists like these and 2) how do you import them into Radarr? I'm genuinely interested.
  3. My branch says develop but is not editable. Any idea how I'm supposed to change this? I have the same error as above (Branch develop is for a previous version of Radarr, set branch to 'Aphrodite' for further updates)
  4. Looks like SMB speeds are back to being pretty slow on 6.8.1. I had reverted to 6.7.2 due to slow SMB in 6.8 but was told this was corrected in 6.8.1... it doesn't seem like it's any better. Guess I'll revert until a later release.
  5. So this got me thinking - I have my default route set on a 10 Gb interface. I've got a four-port daughter card (2x 10 Gb, 2x 1 Gb), so I changed the default route to a gigabit port that uses a standard MTU (1500, whereas the 10 Gb ports run at 9000). Once I did that it worked perfectly the first time. Thanks so much for the help! (A quick path-MTU check sketch follows after this list.)
  6. This is true, except that when PiHole is off it defaults to my router's interface, which has a direct line out to the ISP's DNS servers.
  7. Hi - I tried doing that but get stuck at the same spot. It never moves past "Updating available builds."
  8. Hello all - looking for a little help. I was running the Nvidia build of 6.8 but was having a ton of issues with SMB speeds. I rolled back to 6.7.2 and have been chugging right along for a couple of weeks now. I saw that 6.8.1 was released, which apparently corrects the slow SMB speeds, so I'm ready to try upgrading again. The problem is that if I try to get into the Unraid Nvidia plugin it gets stuck at "Updating available builds," and it has been that way since I rolled back, so now I'm not sure what to do. I have verified network connectivity and ensured PiHole isn't blocking anything the plugin is asking for... no matter how long I wait it never gets past the updating builds screen. Any assistance would be greatly appreciated!
  9. Hi - I am using the ovpn files downloaded directly from PIA - I haven't specified any IP addresses anywhere but am relying (I assume) on DNS to resolve that IP.
  10. Yes, but I am using the gateways that support port forwarding. I tried all of the available sites on their list of port-forward-capable gateways with no luck. Any suggestions? Edit: I also tried disabling port forwarding and that doesn't work either.
  11. Hello! I've just set up DelugeVPN for the first time using SpaceInvaderOne's guide. I've got valid credentials from PIA that I was previously using on a torrent VM running Deluge. Unfortunately, nothing I do seems to get the tunnel established and kept up. If I turn the VPN off, Deluge starts fine, but with the VPN on nothing works. I have pulled the latest certs, tried all of the port-forwarding-capable gateways, and I know my credentials are valid. That is what I get over and over again no matter what I do (aside from turning off the VPN). Any assistance would be greatly appreciated!
  12. Hello all, I am trying to migrate from a 720xd to a 730xd. Luckily, so far I've only conducted test runs with my second 720xd, not the one with my real live data, as I've been wholly unsuccessful in getting things to work right. From what I've read, I should be able to simply remove the disks, place them into the new server, assign them to the same disk numbers, and start the array. I am able to assign the disks back to the proper disk numbers (based on serial number), but I can't start the array because it tells me I've got "Too many wrong and/or missing disks." The disks are all accounted for (two cache devices included) and each is mapped back to the correct spot, yet I am unable to start the array. I'm preparing for a full-scale upgrade of an in-production 720xd, so any assistance I can get would be greatly appreciated!
  13. So I have a few 10 TB disks aside from the two parity disks (four others, I think, for a total of six). I am also ready to move from a DAS (SA120) to this storage pod, which would migrate another twelve 8 TB drives to this thing, so spreading them out may not provide me much relief, seeing as I need at least 12 more spots. Once those 12 are in the server I'll have a total of 29 drives out of the available 45, which means I could keep at least the 10 TB drives to one per port multiplier. It's a conundrum, to say the least.
  14. I got really lucky and got my hands on a Backblaze Storage Pod. It uses port multipliers to go from one to five SATA ports, which allows for up to 45 drives in the server. I am currently in need of a parity rebuild which, by UnRAID's estimation, would take 8 more days (they are two 10 TB parity drives). My question is: my two parity drives are plugged into the port multipliers, which are then plugged into a RAID card. My motherboard has SATA ports on it, and I'm curious whether I should hard-wire the two parity drives to those ports in order to reduce the time it would take to rebuild parity. I have a few other creative ideas on how to do this, but essentially I could a) not use the port multipliers and hard-wire the two drives directly, or b) hard-wire one of the port multipliers to a SATA port on the motherboard to at least exclude the RAID card. I'm not sure which is best and would require the least amount of reconfiguring and/or rebuild time. Any advice would be greatly appreciated. (A rough rebuild-time estimate follows after this list.)
  15. So I've recently upgraded to RC5 and took note of the pause feature for parity rebuilds. I'm glad it's here! I am curious, however: if you pause the parity rebuild and reboot or power off the server, does it pick up where it left off or does it start over?
  16. You are absolutely correct. I have my drives connected to a SATA expander. I got my hands on an old Backblaze Storage Pod 1, which uses port multipliers to get up to 45 drives into it. It's slow indeed.
  17. I honestly don't know how it did that. I don't remember specifically telling it to format the drives.
  18. Thank you for the response. Before I could try that, a reboot of the server showed the drives mounted and part of the array, but a parity rebuild needed to take place. I guess I'll wait 8 more days to find out whether that worked or not. Thanks!
  19. Apologies if this has been covered elsewhere before. I left town with two 10 TB drives clearing. I just got back and have an error that says both disks are unmountable. I am attaching my diagnostics for assistance. I appreciate any help I can get. Thanks! tower-diagnostics-20190314-1654.zip
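
Re: the WireGuard post (item 1) - below is a minimal diagnostic sketch of my own, not anything the container ships, that reads wg0.conf and reports which of the usual fields are blank. The config path and the exact field list are assumptions based on a typical PIA-style WireGuard config; adjust them to your setup.

```python
# check_wg0.py - my own quick diagnostic sketch, not part of any container.
# It reports which fields in wg0.conf are empty, e.g. a blank PublicKey or
# a blank Address under [Interface].
import configparser

CONF_PATH = "/config/wireguard/wg0.conf"   # assumed location; adjust to yours

REQUIRED = {
    "Interface": ["Address", "PrivateKey"],
    "Peer": ["PublicKey", "Endpoint", "AllowedIPs"],
}

parser = configparser.ConfigParser(allow_no_value=True, interpolation=None)
parser.optionxform = str                    # keep WireGuard's CamelCase keys
with open(CONF_PATH) as f:
    parser.read_file(f)

for section, keys in REQUIRED.items():
    for key in keys:
        value = parser.get(section, key, fallback=None)
        status = "OK" if (value or "").strip() else "MISSING/BLANK"
        print(f"[{section}] {key}: {status}")
```

If both Address and the peer PublicKey come back blank, that would be consistent with the PIA API call never completing, rather than anything being wrong with the keys generated by hand in the container console.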
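Re: the default-route/MTU post (item 5) - one quick way to confirm the jumbo-frame theory is a pair of "don't fragment" pings at standard and jumbo sizes. This is just a sketch of the idea, assuming Linux iputils ping is available; the target host and payload sizes are illustrative.

```python
# path_mtu_check.py - a sketch (my own), assuming Linux iputils ping is on PATH.
# Sends a single "don't fragment" ping at standard and jumbo payload sizes;
# if the jumbo one fails while the standard one passes, a 9000-MTU default
# route is the likely culprit for broken WAN traffic.
import subprocess

def ping_df(host: str, payload: int) -> bool:
    """True if one don't-fragment ping with this payload size succeeds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-M", "do", "-s", str(payload), host],
        capture_output=True,
    )
    return result.returncode == 0

# 1472 + 28 bytes of ICMP/IP headers = a 1500-byte frame; 8972 + 28 = 9000.
for payload in (1472, 8972):
    ok = ping_df("1.1.1.1", payload)        # any stable external host works
    print(f"payload {payload}: {'passes' if ok else 'blocked or needs fragmentation'}")
```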
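Re: the parity-rebuild post (item 14) - rebuild time is dominated by the slowest sustained throughput in the chain, so a back-of-the-envelope estimate makes the port-multiplier penalty concrete. The 150 MB/s and 15 MB/s figures below are assumptions (a lone drive on a direct SATA port versus five drives sharing one upstream link), not measurements from this server.

```python
# rebuild_estimate.py - back-of-the-envelope only; throughput figures are guesses.
TB = 1e12                # drive vendors count decimal terabytes
SECONDS_PER_DAY = 86_400

def rebuild_days(drive_size_tb: float, throughput_mb_s: float) -> float:
    """Days to stream the whole drive once at a sustained rate."""
    return (drive_size_tb * TB) / (throughput_mb_s * 1e6) / SECONDS_PER_DAY

print(f"10 TB direct on SATA (~150 MB/s): {rebuild_days(10, 150):.1f} days")          # ~0.8
print(f"10 TB behind a 5:1 port multiplier (~15 MB/s): {rebuild_days(10, 15):.1f} days")  # ~7.7
```

The ~7.7-day figure lines up with the 8-day estimate in the post, which suggests the shared links are the bottleneck. Note, though, that a rebuild also reads every data drive, so moving only the parity drives to the motherboard ports may not help much if the data drives stay behind the multipliers.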