Carlos Talbot

Members · 15 posts
  1. Dumb question, but how do you roll back? I just upgraded my image this morning, am experiencing the same issue, and don't believe I'm running out of memory.

     2020-03-30 10:40:51,514 DEBG 'watchdog-script' stderr output: pgrep: cannot allocate 4611686018427387903 bytes
     2020-03-30 10:40:52,516 DEBG 'watchdog-script' stderr output: pgrep: cannot allocate 4611686018427387903 bytes
  2. Has anyone figured out how to route only a subset of Docker containers through the VPN? This seems to be of interest to many of us, but I haven't seen a step-by-step guide on how it would be implemented (see the sketch after this list). Thanks.
  3. @Dataone, may I ask how you've configured the "Peer allowed IPs" field in your WireGuard settings? By default it's set to 0.0.0.0/0, which routes all traffic on the Unraid server through the VPN tunnel (see the config excerpt after this list). I assume you've restricted it to just the Docker containers on the bridge? Also, do you know how to block traffic for those selected containers if the VPN link goes down? Thanks.
  4. I used to use Synology before switching to Unraid. You won't find any support for iSCSI with Unraid. Synology has better integration with vSphere (e.g. the NFS VAAI Plug-in), and there is no NFS 4.0 in Unraid.
  5. Thanks. I'm back to XFS and things are noticeably different. I tried the same copy process to a share that has cache enabled and the load looked fine. I will avoid BTRFS until I hear otherwise.
  6. So a single BTRFS cache drive doesn't resolve the issue. What's the easiest way to reformat the drive to XFS? Thanks.
  7. As I stated in my original post, I'm on 6.8.0-rc7. As for my particular issue (cache pool / high load), it was reported in 2017, so it has been open for over two years now.
  8. Got it. I'm in the process of switching from two drives in the cache pool to one and keeping it on BTRFS. I'm just surprised this issue is still ongoing, as it's very easy to reproduce.
  9. Sorry, yes, it's set to Yes for cache. This got me thinking: I tried the same copy command to another share that is not using the cache, and sure enough the load held steady at 5 and never got higher (this was also with a Plex transcode running in the background). Containers were accessible without issue. So it does appear to be the cache that is affecting this.
  10. I recently upgraded to rc7 thinking this problem was behind me. It still persists, and it's very easy to reproduce: I copy several GB of files from an unassigned disk to a /mnt/user path (see the sketch after this list). After the memory buffer fills and writes are flushed to disk, I start seeing the I/O wait shoot up to 45, disrupting all running Docker containers. It takes at least 5 minutes for the load to subside and the system to return to normal. I have a cache pool set up with 2 SSDs (no Samsung drives at this point). Is BTRFS the culprit? I'll have to go back to rc5, as the lack of Nvidia drivers is killing my performance as well. tower-diagnostics-20191127-1917.zip
  11. I do see a negative impact. I experience two scenarios. The first is trying to shut down a VM and it hangs; during the shutdown I see continuous stale NFS messages in the tcpdump between the vSphere host and Unraid, and I eventually have to power off the VM forcibly. The second is where all the VMs are shut down, which I can see from the vCenter GUI. I still have access to vCenter, as it's the only VM not using an NFS datastore from Unraid.
  12. I have a VMware ESXi host with vSphere 6.7 using Unraid for NFS-mounted datastores. After a day of use I am seeing errors from the NFS client (the ESXi host) about stale NFS handles, as shown in the tcpdump below, collected from Unraid (see the capture sketch after this list). I've also attached logs. I see this has been a problem with previous versions; has it ever been addressed? Is anyone using Unraid to host VMware datastores via NFS without issue? Thanks.

      14:17:10.824778 IP Tower.local.nfsd > 192.168.10.1.853: Flags [P.], seq 1074985:1075021, ack 36222556, win 16384, options [nop,nop,TS val 3962751192 ecr 44536664], length 36: NFS reply xid 1524920287 reply ok 32 read ERROR: Stale NFS file handle

      tower-diagnostics-20191103-1625.zip
  13. I bought a pair of ConnectX-2 cards with a 3m DAC cable on eBay from this seller: https://www.ebay.com/itm/273916866741 (his price has gone up since I purchased mine earlier this year, though; I paid about 57 bucks). I'm using the two cards between my Unraid server (Supermicro X9SCL) and an HP DL380 G7 that's running ESXi, with Unraid hosting the VMs via NFS datastores. I've attached a screenshot of a storage vMotion session. Note: I've not been able to go over 7 Gb/s, possibly because of my PCIe slots and their limited lanes. For 10Gb cards you need to make sure your servers can handle the bandwidth; lspci will help with this (see the sketch after this list): https://community.mellanox.com/s/article/understanding-pcie-configuration-for-maximum-performance
  14. Now that 6.8-rc1 has been released with WireGuard support, does it make it easier for you to support WireGuard in your Docker images for Deluge and qBittorrent? Thanks.
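
Re item 2 above (routing only some containers through the VPN): a minimal sketch of one common approach, attaching other containers to a VPN container's network namespace so their traffic rides the same tunnel. Container names, image names, and ports below are illustrative assumptions, not a tested recipe; VPN provider settings and credentials are omitted.

    # The VPN container owns the tunnel and must publish the ports of every
    # container that will share its network stack.
    docker run -d --name=delugevpn \
      --cap-add=NET_ADMIN \
      -p 8112:8112 -p 9117:9117 \
      binhex/arch-delugevpn

    # A second container is attached to the VPN container's network namespace,
    # so all of its traffic goes through the same tunnel.
    docker run -d --name=jackett \
      --network=container:delugevpn \
      linuxserver/jackett

A side effect of this layout is that if the VPN container blocks non-tunnel traffic when the link drops (as the binhex images aim to do), the attached containers lose connectivity too, since they have no network stack of their own.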
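
Re item 3 above ("Peer allowed IPs"): for illustration only, this is roughly what the setting translates to in a plain WireGuard peer config; the addresses are made up. AllowedIPs selects which destination networks get routed into the tunnel, so anything narrower than 0.0.0.0/0 keeps the rest of the server's traffic off the VPN.

    [Peer]
    PublicKey  = <provider public key>
    Endpoint   = vpn.example.com:51820
    # 0.0.0.0/0 would send every destination through the tunnel; listing specific
    # destination networks instead leaves all other traffic on the normal route.
    AllowedIPs = 10.8.0.0/24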
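
Re item 10 above: a rough way to reproduce and observe the behaviour described there. Paths are examples (Unassigned Devices normally mounts under /mnt/disks), and iostat comes from the sysstat package, which may need to be installed.

    # Copy a few GB from an unassigned disk into a user share that uses the cache:
    rsync -a --progress /mnt/disks/source_disk/testdata/ /mnt/user/cached_share/

    # In a second shell, watch I/O wait and per-device latency while the page
    # cache flushes to the BTRFS cache pool:
    vmstat 5            # the "wa" column is I/O wait
    iostat -x 5         # per-device utilization and await times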
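
Re item 12 above: a capture along the lines of the one quoted can be taken on the Unraid side with something like the command below. The interface name is an assumption (br0 is the usual Unraid bridge); the client address is the ESXi host from the quoted output, and NFSv3 datastores talk to server port 2049.

    # NFS traffic between Unraid and the ESXi host:
    tcpdump -i br0 -nn host 192.168.10.1 and port 2049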
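
Re item 13 above: how lspci can show whether a 10Gb card is being starved by its slot. The device address below is an example; find the real one with plain lspci first. The ConnectX-2 is a PCIe 2.0 x8 device, so a slot that negotiates a narrower width or a lower speed can cap throughput below line rate.

    # Locate the card:
    lspci | grep -i mellanox

    # Check the negotiated link speed/width (03:00.0 is an example address):
    lspci -s 03:00.0 -vv | grep -E "LnkCap|LnkSta"
    # LnkCap is what the card supports; LnkSta is what was actually negotiated.
    # For example, "Speed 5GT/s, Width x4" is PCIe 2.0 x4, roughly 16 Gbit/s of
    # usable bandwidth after 8b/10b encoding.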