Everything posted by weirdcrap

  1. You need to search for "rtorrent" or "binhex" as that's part of the application name. rutorrent is the web frontend.
  2. Ah OK. For now I'll leave the rules disabled. When I've got some time I'll play around with pfSense and see if I can modify the rules to exclude VOID from my DNS redirecting. Thanks for your help with this! EDIT: I was able to make an alias group for my entire subnet, excluding my server, and set that as the source for all of my firewall rules. Now the container can get straight out to the internet while every other device still has to go through pfSense.
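     For anyone not running pfSense, the same idea expressed as iptables NAT rules would look roughly like the sketch below. This is an illustration only: the br0 interface and the 192.168.1.10 server address are placeholder assumptions, and on pfSense itself this is all done through the alias and port-forward rule in the GUI instead.
       # redirect all LAN DNS (port 53) to the local resolver, except traffic from the server itself
       iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 ! -s 192.168.1.10 -j REDIRECT --to-ports 53
       iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 ! -s 192.168.1.10 -j REDIRECT --to-ports 53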
  3. I'm not quite sure I understand the caveat you're stating after the "however". If I set my router's IP in the name server variable, it should let me keep my firewall rules as I had them: the firewall would take in the DNS query, realize it doesn't know the answer, pass it up to one of the public servers it's set up to reference, then hand the response back to the container. Shouldn't it not matter that later resolution would fail, since resolution is blocked after the initial endpoint lookup by the iptables rules anyway? Or am I grossly misunderstanding how this works?
  4. I think I found the problem: in addition to the LAN rules, there was a NAT rule I had forgotten about. So I guess my setup at home breaks the container because it wants to reach the internet directly and won't let my firewall play middleman in the DNS lookup? It's unfortunate that I have to disable them, as I liked the peace of mind of knowing that IoT devices and things with hard-coded DNS can't just bypass my filtering.
  5. Well I disabled the rules and reloaded the firewall and it still won't resolve... Will I be able to test DNS resolution from within the container if the initial resolution fails? EDIT: Ah wait, I missed one of the rules. It appears to be working now... let me do some more checking.
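     To answer my own question about testing from inside the container, something like the sketch below works once the container is at least running. The container name is whatever you called yours (binhex-rtorrentvpn here is just an example), and whether dig or drill is available depends on the image.
       # open a shell in the running container
       docker exec -it binhex-rtorrentvpn bash
       # inside the container: see which name servers were written, then query one of them directly
       cat /etc/resolv.conf
       dig +short ca-montreal.privacy.network @1.1.1.1
       # ping an IP to confirm raw connectivity even when DNS is dead
       ping -c 3 1.1.1.1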
  6. The defaults that come with the docker: 209.222.18.222,84.200.69.80,37.235.1.174,1.1.1.1,209.222.18.218,37.235.1.177,84.200.70.40,1.0.0.1 I do have some rules in pfSense to prevent DNS lookups from going straight out to the internet (I want everything to go through unbound so it can be checked/filtered). I'm going to try turning those off and see if that fixes it.
  7. So OpenVPN also appears to be broken? It seems like no matter what, the docker can't resolve any endpoints...
     Error: error sending query: Could not send or receive, because of network error
     2021-05-20 08:29:47,603 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'ca-montreal.privacy.network', sleeping before retry...
     2021-05-20 08:35:19,650 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'ca-ontario.privacy.network', sleeping before retry...
     I have about half a dozen endpoints set up in my OpenVPN config:
     client
     dev tun
     proto udp
     remote sweden.privacy.network 1198
     remote swiss.privacy.network 1198
     remote ca-ontario.privacy.network 1198
     remote ca-montreal.privacy.network 1198
     remote ca-toronto.privacy.network 1198
     remote ca-vancouver.privacy.network 1198
     remote de-frankfurt.privacy.network 1198
     remote ro.privacy.network 1198
     resolv-retry infinite
     nobind
     persist-key
     cipher aes-128-cbc
     auth sha1
     tls-client
     remote-cert-tls server
     What's even odder is that I have the PIA Windows client and it works fine with these endpoints on my network at home. I also have this same docker container on another server (different network) using these exact same OpenVPN settings and it works fine.
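     A quick loop like the one below, run from a machine where DNS is known to work, shows which of those remotes actually resolve at any given moment; it's just a sketch using plain dig.
       # check each configured PIA remote and print the first A record it resolves to
       for host in sweden swiss ca-ontario ca-montreal ca-toronto ca-vancouver de-frankfurt ro; do
           printf '%s.privacy.network -> ' "$host"
           dig +short "$host.privacy.network" | head -n1
       done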
  8. Ah of course, why didn't I think to check if the endpoint was down. I'm just glad it isn't me, I thought I was missing something super obvious in the setup of the container. I'll keep an eye out for your fix and/or see if the endpoint comes back up.
  9. I can't for the life of me get any of the VPN containers to start with WireGuard support (I've tried rtorrent and qbittorrent). The only symptoms I can see are that the wg0.conf file is never generated and the webui never starts. The logs always seem to just stop after adding DNS servers to resolv.conf... I'm not seeing anything in the FAQs, and these threads are so unruly and hard to navigate if you don't have a specific error to search for. These are freshly pulled containers, no previous appdata or template was available, so these should have all the latest changes in them.
     EDIT: I'm seeing:
     2021-05-18 08:32:52,689 DEBG 'start-script' stderr output: Error: error sending query: Could not send or receive, because of network error
     2021-05-18 08:32:52,690 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'nl-amsterdam.privacy.network', sleeping before retry...
     2021-05-18 08:34:57,807 DEBG 'start-script' stderr output: Error: error sending query: Could not send or receive, because of network error
     2021-05-18 08:34:57,808 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'nl-amsterdam.privacy.network', sleeping before retry...
     2021-05-18 08:37:02,930 DEBG 'start-script' stderr output: Error: error sending query: Could not send or receive, because of network error
     2021-05-18 08:37:02,931 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'nl-amsterdam.privacy.network', sleeping before retry...
     2021-05-18 08:39:08,052 DEBG 'start-script' stderr output: Error: error sending query: Could not send or receive, because of network error
     2021-05-18 08:39:08,053 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'nl-amsterdam.privacy.network', sleeping before retry...
     2021-05-18 08:41:13,168 DEBG 'start-script' stderr output: Error: error sending query: Could not send or receive, because of network error
     2021-05-18 08:41:13,169 DEBG 'start-script' stdout output: [debug] Having issues resolving name 'nl-amsterdam.privacy.network', sleeping before retry...
     Trying to dig or ping anything returns "no servers could be reached" errors. I can see the name servers defined in /etc/resolv.conf, they just flat out don't work... DNS seems to be broken in the container? I can ping IP addresses but no hostnames at all...
     EDIT: OK, so my issue looks very similar to this thread, which never got an answer or reply, and I can't seem to find anyone else who's having this problem. @binhex Is there any further testing I can do?
  10. I checked overall CPU usage on the stats page as well as individual core utilization on the WebUI dashboard. With nothing else running except a file transfer, I don't see utilization over 5% on any single core. There may be an occasional spike, but it always quickly drops back down to almost nothing. Yes, we are referring to roughly the same speeds; I use the big B for bytes. I agree, I've also noticed that in the last few months it seems to linger around 1MB/s more than it used to. When this issue first started for me, the speed would drop down into the kilobytes and stay there until I cancelled the transfer, so 1MB/s is an improvement over the dial-up-like speeds I was getting before.
  11. In my testing, hardware doesn't seem to make a difference. I have two i7 Haswells, each with 32GB of RAM, and they struggle just as badly as my old AMD FX-6300 did. I would be surprised if hardlinks being on or off makes a difference; I have that setting on and never considered turning it off, but if you think you see improvements with it off I'll give it a shot. Like right now: I started a transfer no more than a few minutes ago and the speed has already tanked to barely 1MB/s. My CPU usage on both servers is under 5%.
  12. Ah, I didn't realize you weren't already using bwlimit. That might explain why I haven't seen the CPU spikes you have. My remote server shares bandwidth with a friend's business, so I always make sure to use only roughly half of the available bandwidth. So with the bandwidth limit in place, can your WireGuard transfer consistently maintain its speed? I'm still seeing my speeds drop to 1MB/s or less at random points during my tests, with my bwlimit set to 5MB/s.
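      For context, the limit I keep referring to is just rsync's --bwlimit flag, assuming rsync is what's moving the data (the paths and host below are placeholders):
        # cap the transfer at roughly 5 MB/s; --bwlimit takes KiB/s or a unit suffix like "5m"
        rsync -avhP --bwlimit=5m /mnt/user/share/ user@remote-server:/mnt/user/share/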
  13. I feel your pain. Once a month when I sync my servers, I get frustrated and spend a day on furious research while I babysit the transfer. So far I haven't found anything that resolves the issue, but please do report back if you figure it out first.
  14. My LAN MTUs are all 1500, and running traceroute --mtu myendpoint shows the MTU is not stepped down at all before it reaches its destination out on the internet, so the default WireGuard setting of 1420 should be correct for me and my network. I was simply saying I'd be willing to try tuning it lower just to see if it makes any difference, if you had promising results. In my experience it doesn't seem to have any effect.
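      For anyone wanting to run the same check, the probes look like this (the endpoint hostname is a placeholder, and the --mtu option needs the modern Linux traceroute):
        # discover the path MTU along the route to the VPN endpoint
        traceroute --mtu my.vpn.endpoint.example
        # or probe directly: 1472 bytes of payload + 28 bytes of ICMP/IP headers = a full 1500-byte packet
        ping -M do -s 1472 -c 3 my.vpn.endpoint.example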
  15. Yeah, I've read that the default is 1420, but I figured it couldn't hurt to set it manually in case it wasn't being auto-detected correctly. I had also tried going all the way down to 1380 and it didn't seem to make much of any improvement. I'm not sure if taking my MTU too far down could be making things worse, so maybe 1380 was too far the other way and 1412 is the sweet spot? Let me know if you can get consistent results across several large transfers. It would be great to get some sort of debug output from WireGuard, like I mentioned a few posts back with the kernel debug options. I don't know if we'll ever nail this down without some logs or WG dev help.
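      For reference, the manual override is just an MTU line in the tunnel's [Interface] section of the WireGuard config (a sketch; everything else in that section stays as it was generated):
        [Interface]
        # ...existing Address / PrivateKey lines unchanged...
        MTU = 1412    # wg-quick defaults to 1420 when this line is absent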
  16. I have not seen any correlation between CPU usage spikes and my WireGuard performance woes. My processor can be entirely idle with nothing running and I'll still have a speed drop. Anecdotally, I have had some marginal improvement in how long I can maintain a transfer at speed through the tunnel by adjusting the tunnel MTU down to 1420. However, this is not foolproof; it will still lose speed at some point and require me to restart the transfer. At 1420 I was able to move about 300GB of data at speed over the course of 12-14 hours. In total this month I was able to move about 700GB and only had to restart the transfer 4-5 times.
  17. Yes, I've written WireGuard off at this point; it's nice for basic remote access but it's hot garbage for file transfers. I don't know if it's UnRAID's implementation or if WireGuard just sucks at prolonged file transfers, but I can't even get through a single 8GB file without my speed tanking to under 100KB/s. Supposedly there are kernel-level debug options to get some logs out of WireGuard, but I never got a response from LJM42 on whether those are present in UnRAID or not. At some point, if I have tons of free time, I might try to set up a WireGuard tunnel outside of UnRAID in Linux. But at this point I just go completely around the WireGuard tunnel.
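      The kernel-level debug option I'm talking about is the dynamic debug toggle for the wireguard module; whether it works depends on the kernel being built with dynamic debug support, which is exactly the part I never got confirmation on for UnRAID. The sketch below is the generic Linux procedure:
        # enable verbose logging from the wireguard kernel module
        echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control
        # watch the messages as they come in
        dmesg -wT | grep -i wireguard
        # turn it back off when done
        echo module wireguard -p > /sys/kernel/debug/dynamic_debug/control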
  18. Well after upgrading my existing gaming PC I decided to take the old Mobo and CPU and put it into VOID as a major upgrade. So now VOID has an i7-4790K and an ASUS ROG Maximus VII Hero with an Intel Killer NIC. A complete change of hardware has made no difference in this issue either. I made it about 30 minutes into copying a 50GB ISO file and the speed has tanked already.
  19. lol oh dang I totally forgot about that. That would probably do it, it was for my previous mobo. Testing now. EDIT: yeah that was it. You da man squid. It would have taken me days of frustration before I found that on my own haha.
  20. Solution: check your syslinux config and make sure it's standard. I had changed mine for an old mobo and never removed a switch I had added.
      I recently upgraded my gaming PC to a Ryzen 5000 build, so I moved my old hardware to VOID as a major upgrade. I have an Asus Maximus VII Hero and an Intel i7-4790K. Intel Ark indicates I have VT-x and VT-d support on the CPU, and I can see options to enable them in the BIOS. However, whenever I boot into UnRAID it reports that IOMMU is disabled? This is not a deal breaker for me as I wasn't planning on using VMs on here, but I thought it would be nice to at least have the option. Is there another setting I'm missing here? My Googling suggests that those should be the only two options I need to turn on. I updated my BIOS to the pre-beta BIOS (I'm not keen on running a beta BIOS) and that didn't make a difference. void-diagnostics-20210318-1942.zip
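      For anyone else who lands on this: the stock UnRAID boot entry in /boot/syslinux/syslinux.cfg looks roughly like the snippet below, and any extra switches left over from old hardware live on the append line. In my case, removing the leftover switch is what fixed the IOMMU detection.
        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot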
  21. Yeah, I do run monthly parity checks; NODE gets it on the 1st and VOID on the 15th. NODE completed its parity check on 3/1 with no errors and this disk reported no issues. Then a week later I removed it from NODE, drove it 4 hours back to my house to put it in VOID, and it failed in 30 minutes. I'm not sure how else I could have caught this earlier except with monthly SMART testing of all disks.
  22. Alright, thanks. Lesson learned: don't trust a used disk just because it didn't report any problems. I also need to be better about regularly running SMART tests on my array disks so I can hopefully catch this stuff before it's a catastrophic failure and my array is no longer protected.
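      In case it helps anyone else, running those tests from the UnRAID terminal is straightforward (sdX is a placeholder for the actual device letter):
        # start an extended self-test; it runs in the background on the drive and takes hours on big disks
        smartctl -t long /dev/sdX
        # check the self-test log once it finishes
        smartctl -l selftest /dev/sdX
        # full attribute and health dump
        smartctl -a /dev/sdX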
  23. @JorgeB It immediately fails both tests.
      SMART Self-test log structure revision number 1
      Num  Test_Description     Status                    Remaining  LifeTime(hours)  LBA_of_first_error
      # 1  Extended offline     Completed: read failure   90%        31759            45736176
      # 2  Short offline        Completed: read failure   10%        31759            45739672
      That's why I was surprised. NODE hadn't recorded any pending or reallocated sectors for this disk. Then I brought it home (carefully packed in a HDD box with padding) and it just immediately fails right off the bat. Thankfully it looks like B&H Photo has the 6TB Red Plus drives in stock. Final question: since I put that 6TB disk in and started the rebuild, I have to replace it with a 6TB or bigger drive, correct?
  24. Did you see my edit about the 31 reallocated sectors? Those weren't there when I took the disk out of NODE. Do you still think it's a power/cable issue? These are all hot-swap bays where I don't have to mess with cabling, and the old disk had no issues in this bay. Is it safe to cancel the rebuild? It will just leave disk 6 emulated?
  25. I recently put a new 8TB drive in NODE and was going to use the existing 6TB drive I replaced in VOID. I stupidly did not preclear the drive, as I have done this many times before without issue, but this time, barely 2% into the disk rebuild on VOID, the disk threw 1500+ read errors, then write errors, and then UnRAID disabled the disk. What is my best course of action here? I can't put the old 2TB disk I replaced back into the server since I already started the rebuild, right? My parity is in a paused "Read Check" state. Should I cancel it? Because UnRAID disabled the disk I can't run SMART tests from the GUI, nor can I seem to reach the disk via the command line to test it with smartctl. I of course don't have any spare drives on hand. void-diagnostics-20210307-0621.zip
      EDIT: I can't find it in the terminal because UnRAID seems to have dropped the disk and changed its drive letter? The log shows it changed to sdw after it shit the bed.
      EDIT2: Holy shit, that went south quickly. I took it out of NODE and there were no reallocated sectors. Now it's got 31 reallocated sectors in a matter of minutes:
      SMART Attributes Data Structure revision number: 16
      Vendor Specific SMART Attributes with Thresholds:
      ID#  ATTRIBUTE_NAME           FLAG    VALUE  WORST  THRESH  TYPE      UPDATED  WHEN_FAILED  RAW_VALUE
        1  Raw_Read_Error_Rate      0x002f  200    200    051     Pre-fail  Always   -            0
        3  Spin_Up_Time             0x0027  224    196    021     Pre-fail  Always   -            7775
        4  Start_Stop_Count         0x0032  098    098    000     Old_age   Always   -            2967
        5  Reallocated_Sector_Ct    0x0033  199    199    140     Pre-fail  Always   -            31
        7  Seek_Error_Rate          0x002e  100    253    000     Old_age   Always   -            0
        9  Power_On_Hours           0x0032  057    057    000     Old_age   Always   -            31759
       10  Spin_Retry_Count         0x0032  100    100    000     Old_age   Always   -            0
       11  Calibration_Retry_Count  0x0032  100    253    000     Old_age   Always   -            0
       12  Power_Cycle_Count        0x0032  100    100    000     Old_age   Always   -            40
      192  Power-Off_Retract_Count  0x0032  200    200    000     Old_age   Always   -            19
      193  Load_Cycle_Count         0x0032  190    190    000     Old_age   Always   -            30273
      194  Temperature_Celsius      0x0022  124    094    000     Old_age   Always   -            28
      196  Reallocated_Event_Count  0x0032  169    169    000     Old_age   Always   -            31
      197  Current_Pending_Sector   0x0032  200    200    000     Old_age   Always   -            0
      198  Offline_Uncorrectable    0x0030  100    253    000     Old_age   Offline  -            0
      199  UDMA_CRC_Error_Count     0x0032  200    200    000     Old_age   Always   -            0
      200  Multi_Zone_Error_Rate    0x0008  100    253    000     Old_age   Offline  -            0
      So I guess this disk was just a ticking time bomb from the get-go?
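      For completeness, once the kernel re-enumerates a dropped drive you can find its new device node by serial number and query SMART there directly (sdw is the letter it came back as in my log; the commands below are a generic sketch):
        # match the disk's serial number to its current /dev/sd* letter
        ls -l /dev/disk/by-id/ | grep -v part
        # then pull SMART data straight from the new node
        smartctl -a /dev/sdw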