weirdcrap

  1. Yeah, I've read that the default is 1420, but I figured it couldn't hurt to set it manually in case it wasn't auto-detecting correctly. I had also tried going all the way down to 1380 and it didn't seem to make much of any improvement. I'm not sure if taking my MTU too far down could be making things worse, so maybe 1380 was too far the other way and 1412 is the sweet spot (see the MTU probe sketch after this list). Let me know if you can get consistent results with several large transfers. It would be great to get some sort of debug output from WireGuard like I mentioned a few posts back with the kernel debug option…
  2. I have not seen any correlation between CPU usage spikes and my WireGuard performance woes. My processor can be entirely idle with nothing running and I'll still have a speed drop. Anecdotally, I have had some marginal improvement in how long I can maintain a transfer at speed through the tunnel by adjusting the tunnel's MTU down to 1420. However, this is not foolproof and it will still lose speed at some point and require me to restart the transfer. At 1420 I was able to move about 300GB of data at speed over the course of 12-14 hours. In total this month I was…
  3. Yes, I've written WireGuard off at this point; it's nice for basic remote access but it's hot garbage for file transfers. I don't know if it's UnRAID's implementation or if WireGuard just sucks at prolonged file transfers, but I can't even get through a single 8GB file without my speed tanking to <100KB/s. Supposedly there are kernel-level debug options to get some logs from WireGuard (see the debug-logging sketch after this list), but I never got a response from LJM42 on whether those are present in UnRAID or not. At some point if I have tons of free time I might try to set up a WireGuard tunnel outside of UnRAID…
  4. Well, after upgrading my existing gaming PC I decided to take the old mobo and CPU and put them into VOID as a major upgrade. So now VOID has an i7-4790K and an ASUS ROG Maximus VII Hero with an Intel Killer NIC. A complete change of hardware has made no difference in this issue either. I made it about 30 minutes into copying a 50GB ISO file and the speed has already tanked.
  5. lol oh dang, I totally forgot about that. That would probably do it; it was for my previous mobo. Testing now. EDIT: yeah, that was it. You da man, Squid. It would have taken me days of frustration before I found that on my own haha.
  6. Solution: check your syslinux config and make sure it's standard; I had changed mine for an old mobo and never removed a switch I had added (see the syslinux/IOMMU check sketch after this list). I recently upgraded my gaming PC to a Ryzen 5000 build, so I moved my old hardware to VOID as a major upgrade. I have an ASUS Maximus VII Hero and an Intel i7-4790K. Intel Ark indicates I have VT-x and VT-d support on the CPU and I can see options to enable them in the BIOS. However, whenever I boot into UnRAID it reports that IOMMU is disabled? This is not a deal breaker for me as I wasn't p…
  7. Yeah, I do run monthly parity checks; NODE gets it on the 1st and VOID on the 15th. NODE completed its parity check on 3/1 with no errors and this disk reported no issues. Then a week later I remove it from NODE, drive it 4 hours back to my house to put it in VOID, and it fails in 30 minutes. I'm not sure how else I could have caught this earlier except with monthly SMART testing of all disks.
  8. Alright, thanks. Lesson learned: don't trust a used disk just because it didn't report any problems. I also need to be better about regularly running SMART tests on my array disks so I can hopefully catch this stuff before it's a catastrophic failure and my array is no longer protected.
  9. @JorgeB It immediately fails both tests (see the smartctl sketch after this list):

     SMART Self-test log structure revision number 1
     Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Extended offline    Completed: read failure  90%        31759            45736176
     # 2  Short offline       Completed: read failure  10%        31759            45739672

     That's why I was surprised. NODE hadn't recorded any pending or reallocated sectors for this disk. Then I brought it home (carefully packed in a HDD box with padding) and it just immediately fails right off the bat. Thankfully…
  10. Did you see my edit about the 31 reallocated sectors? Those weren't there when I took the disk out of NODE. Do you still think it's a power/cable issue? These are all hot-swap bays where I don't have to mess with cabling, and the old disk had no issues in this bay. Is it safe to cancel the rebuild? It will just leave disk 6 emulated?
  11. I recently put a new 8TB drive in NODE and was going to use the existing 6TB drive I replaced in VOID. I stupidly did not preclear the drive, as I have done this many times before without issue, but this time, barely 2% into the disk rebuild on VOID, the disk threw 1500+ read errors, then write errors, and then the disk was disabled. What is my best course of action here? I can't put the old 2TB disk I replaced it with back into the server since I already started the rebuild, right? My parity is in a paused "Read Check" state. Should I cancel it? Because UnRAID…
  12. The standard go file contains only:

      #!/bin/bash
      # Start the Management Utility
      /usr/local/sbin/emhttp &

      So remove "-p 8088" (see the go-file sketch after this list). You can change the port in the webui under Settings > Management Access.
  13. Final update. I regret to report that I am now 100% positive this entire issue is caused by WireGuard itself, as @claunia mentioned, and no amount of settings changes is going to fix it. I re-enabled direct SSH access to NODE through the firewall and restricted it to my current Comcast IP. When using the exact same setup and going entirely around the WG tunnel, I get my full, consistent speeds without any random drops or issues. I transferred roughly 300GB of data over the course of about 12 hours yesterday and NOT ONCE did the speed drop to an unacceptable level (only minor…
  14. Some promising results with an MTU of 1380. I was able to complete a 60GB transfer from start to finish with zero drop in speed. I've started a much larger 250GB transfer; we shall see if I can maintain speed through the entire thing. EDIT: ANNNNDDDDD just like that, there it goes. This is just ridiculous. EDIT2: I've re-opened SSH to the world, restricted to my current Comcast IP only. I'm running my 250GB transfer completely outside of the WireGuard tunnel and so far so good, but I've said that dozens of times up to this point so I won't hold my breath.
  15. With the GPU changes (https://wiki.unraid.net/Unraid_OS_6.9.0#GPU_Driver_Integration), the old method of enabling Intel QuickSync via the go file is no longer recommended then?

      # Enable Intel QuickSync HW Transcoding
      modprobe i915
      chmod -R 777 /dev/dri

      (See the modprobe sketch after this list.)
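
A quick way to sanity-check the MTU sweet spot discussed in post 1 is to probe the path MTU with ping's don't-fragment flag before pinning a value. A minimal sketch; the endpoint hostname is a placeholder:

    # Probe the remote endpoint with the DF bit set (Linux iputils ping).
    # ICMP payload + 28 bytes of IP/ICMP headers = packet size, so a
    # 1384-byte payload tests a 1412-byte path MTU.
    ping -c 3 -M do -s 1384 node.example.com

    # If that passes but larger payloads fragment, pin the tunnel MTU
    # in the WireGuard interface config (wg-quick syntax):
    # [Interface]
    # MTU = 1412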
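On the kernel debug logging mentioned in post 3: the upstream WireGuard docs describe enabling its messages via dynamic debug, assuming the kernel was built with CONFIG_DYNAMIC_DEBUG (whether UnRAID's was is exactly the open question in that post):

    # Turn on WireGuard's dynamic debug messages
    echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control
    # Follow them with human-readable timestamps
    dmesg -wT | grep -i wireguard
    # Turn them back off when done
    echo module wireguard -p > /sys/kernel/debug/dynamic_debug/control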
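For the syslinux check in post 6, a sketch of what to compare; paths are the UnRAID defaults:

    # Show the kernel line the server actually booted with, then compare
    # it against the config on the flash drive for leftover switches.
    cat /proc/cmdline
    cat /boot/syslinux/syslinux.cfg
    # Confirm whether the kernel actually brought the IOMMU up (Intel
    # boards log DMAR lines when VT-d initializes).
    dmesg | grep -i -e dmar -e iommu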
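For the regular SMART testing posts 7-9 talk about, a minimal smartctl sketch; /dev/sdX is a placeholder for the disk in question:

    # Kick off an extended (long) offline self-test; it runs in the
    # background and takes hours on a 6TB disk.
    smartctl -t long /dev/sdX
    # Later, read back the self-test log (the table quoted in post 9
    # is this output) and the attributes that flag a dying disk.
    smartctl -l selftest /dev/sdX
    smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'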
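To make the go-file fix in post 12 concrete, this is roughly the before and after, assuming the port flag was tacked onto the emhttp line:

    #!/bin/bash
    # Start the Management Utility
    # Modified line that pins the webui to port 8088 -- remove the flag:
    #/usr/local/sbin/emhttp -p 8088 &
    # Stock line:
    /usr/local/sbin/emhttp &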
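On post 15: per the 6.9 release notes linked there, the go-file lines get replaced by dropping a (possibly empty) modprobe config onto the flash drive, which tells UnRAID to load the driver itself at boot. A sketch; check the linked wiki page for the authoritative steps:

    # UnRAID 6.9+ loads i915 automatically if this file exists.
    touch /boot/config/modprobe.d/i915.conf
    # Reboot (or run `modprobe i915` once) and /dev/dri should appear;
    # the old chmod -R 777 /dev/dri line is reportedly no longer needed.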