Carlos Talbot

Members

  • Content Count: 12
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Carlos Talbot

  • Rank: Member

  1. I used to use Synology before switching to Unraid. You won't find any support for iSCSI with Unraid. Synology has better integration with vSphere (e.g., the NFS VAAI plug-in). There is also no NFS 4.0 in Unraid.
  2. Thanks. I'm back to XFS and things are noticeably different. I tried the same copy process to a share that has cache enabled and load looked fine. I will avoid BTRFS until I hear otherwise.
  3. So a single BTRFS cache drive doesn't resolve the issue. What's the easiest way to reformat the drive to XFS? Thanks.
  4. As I stated in my original post, I'm on 6.8.0-rc7. As for my particular issue (cache pool/high load), it was reported in 2017, so it's been over 2 years now.
  5. Got it. I'm in the process of switching from 2 drives in the cache pool to 1 and keeping it on BTRFS. I'm just surprised this issue is still ongoing, as it's very easy to reproduce.
  6. Sorry, yes, it's set to Yes for cache. This got me thinking. I tried the same copy command to another share that is not using the cache. Sure enough, the load held steady at 5 and never got higher (this also includes a Plex transcode in the background). Containers were accessible without issue. So it does appear to be the cache that is affecting this.
  7. I recently upgraded to rc7 thinking this problem was behind me. It still persists, and it's very easy to reproduce: I copy several GB of files from an unassigned disk to a /mnt/user path. After the memory buffer fills and writes are flushed to disk, I see the I/O wait shoot up to 45, disrupting all running Docker containers. It takes at least 5 minutes for the load to subside and the system to return to normal (see the reproduction sketch after this list). I have a cache pool set up with 2 SSDs (no Samsung drives at this point). Is BTRFS the culprit? I'll have to go back to rc5, as the lack of Nvidia drivers is killing my performance as well. tower-diagnostics-20191127-1917.zip
  8. I do see a negative impact. I experience two scenarios. The first is trying to shut down a VM and it hangs; during the shutdown I see continuous stale NFS messages in the tcpdump between the vSphere host and Unraid, and I eventually have to power off the VM forcibly. The second is where all the VMs end up shut down, which I can see from the vCenter GUI. I still have access to vCenter as it's the only VM not using an NFS datastore from Unraid.
  9. I have a VMware ESXi host with vSphere 6.7 using Unraid for NFS-mounted datastores. After a day of use I am seeing errors from the NFS client (the ESXi host) about stale NFS handles, as shown in the tcpdump output below, collected from Unraid (a capture sketch follows this list). I've also attached logs. I see this has been a problem with previous versions. Has this ever been addressed? Is anyone using Unraid to host VMware datastores via NFS without issue? Thanks.
     14:17:10.824778 IP Tower.local.nfsd > 192.168.10.1.853: Flags [P.], seq 1074985:1075021, ack 36222556, win 16384, options [nop,nop,TS val 3962751192 ecr 44536664], length 36: NFS reply xid 1524920287 reply ok 32 read ERROR: Stale NFS file handle
     tower-diagnostics-20191103-1625.zip
  10. I bought a pair of ConnectX-2 cards with a 3m DAC cable on eBay from this seller: https://www.ebay.com/itm/273916866741 His price has gone up since I purchased them earlier this year, though; I paid about 57 bucks. I'm using the two cards between my Unraid server (Supermicro X9SCL) and an HP DL380 G7 that's running ESXi. Unraid is hosting the VMs via NFS datastores. I've attached a screenshot of a storage vMotion session. Note that I've not been able to go over 7 Gb/sec, possibly because of my PCIe slots and their limited lanes. For 10Gb cards you need to make sure your servers can handle the bandwidth; lspci will help with this (see the sketch after this list): https://community.mellanox.com/s/article/understanding-pcie-configuration-for-maximum-performance
  11. Now that 6.8-rc1 has been released with WireGuard support, does that make it easier for you to support WireGuard in your Docker images for Deluge and qBittorrent? Thanks.
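
Reproduction sketch for post 7: a minimal shell sequence for triggering the high I/O wait by copying several GB from an unassigned disk to a cache-enabled user share. The paths and share names below are examples rather than values from the original posts, and iostat is only present if the sysstat package is installed.

    # Copy several GB from an unassigned disk to a cache-enabled user share
    # (example paths; substitute your own unassigned disk and share names)
    rsync -a --progress /mnt/disks/source_disk/testdata/ /mnt/user/cached-share/testdata/

    # In a second shell, watch load and I/O wait while the copy runs
    iostat -x 5     # %iowait column and per-device utilization, refreshed every 5 seconds
    top             # 'wa' in the CPU summary line, plus the load average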
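Capture sketch for post 9: one way to collect the NFS traffic between Unraid and the ESXi host with tcpdump. The interface name is an assumption, and the client IP simply matches the one shown in the quoted output; the original post does not state the exact command used.

    # Print decoded NFS traffic to/from the ESXi host (assumed interface eth0)
    tcpdump -ni eth0 host 192.168.10.1 and port 2049

    # Or write full packets to a file for later analysis in Wireshark
    tcpdump -ni eth0 -s 0 -w /mnt/user/captures/nfs-stale.pcap host 192.168.10.1 and port 2049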
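PCIe check sketch for post 10: how lspci can show whether a slot gives a 10Gb NIC enough lanes. The Mellanox bus address below is a placeholder; use the address reported on your own system.

    # Find the Mellanox NIC's PCI address (assumes a ConnectX-2 card)
    lspci | grep -i mellanox

    # Compare what the card supports (LnkCap) with what the slot negotiated (LnkSta)
    # (replace 03:00.0 with the address from the command above)
    lspci -s 03:00.0 -vv | grep -E 'LnkCap:|LnkSta:'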