gcoppin

Posts posted by gcoppin

  1. Hi,

    Thank you for sharing your advice and experience. I agree with you: 9p is a 'no go'. I will continue to use NFS, as it seems to provide the best results, but it's still not enough for my production needs.

     

    What do you think about my last post? Does it seem normal to you that a share mounted with the 'Unassigned Devices' plugin (a single NVMe drive) gives me the expected speed, while the share mounted from the cache pool (3 NVMe drives in RAID 0) gives me far lower performance? I'm a bit puzzled... why do I see this difference?

     

    At the moment I'm thinking of creating a RAID 0 with the 'Unassigned Devices' plugin as described here and seeing what the results are (a rough sketch of what I have in mind is below). It would be really unfortunate to see a loss of performance when putting the NVMe drives in RAID...
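
    A minimal sketch, assuming the three drives show up as /dev/nvme0n1, /dev/nvme1n1 and /dev/nvme2n1 (placeholder device names; Unassigned Devices would normally handle the mounting itself):

    # create one BTRFS filesystem striped (RAID 0) across the three drives - destroys existing data
    mkfs.btrfs -f -d raid0 -m raid0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
    # mounting any member device mounts the whole pool
    mkdir -p /mnt/disks/nvme-raid0
    mount /dev/nvme0n1 /mnt/disks/nvme-raid0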

     

     

  2. Hi,

     

    I've entirely re-created my cache pool (pre-clear/format/etc.) but it didn't change much. So I decided to run a simpler test: I mounted a single NVMe disk (BTRFS) with the Unassigned Devices plugin and shared it over NFS.

     

     

    Here is the result from unRaid:

    Quote

     

    root@unRaid:~# dd if=/dev/zero of=/mnt/disks/test/speedtest bs=1G count=5 oflag=direct

    5+0 records in

    5+0 records out

    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 4.27351 s, 1.3 GB/s

     

     

    Here is the result from within the VM:

    Quote

    user@VM-linux:~$ dd if=/dev/zero of=/mnt/test/speedtest bs=1G count=5 oflag=direct
    5+0 records in
    5+0 records out
    5368709120 bytes (5.4 GB, 5.0 GiB) copied, 3.86565 s, 1.4 GB/s


     

    As we can see, it's even slightly faster in the VM than directly on unRaid, and the graph in the unRaid stats page matches the console output. It's all pretty good!

     

    The question now is: how come the single NVMe disk mounted with the Unassigned Devices plugin gives the expected results, while the cache pool does not? I understand the VM is running on this cache pool, but it has almost no activity (the stats graph shows only small spikes every now and then, around 2.5 MB/s), so that shouldn't be the problem, should it?
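
    To dig further I plan to inspect the pool itself from the unRaid console. A few standard btrfs commands (nothing Unraid-specific, and purely read-only) should show whether the data profile is really RAID0 and whether anything is busy in the background:

    btrfs filesystem df /mnt/cache      # data/metadata profiles - data should report RAID0
    btrfs filesystem usage /mnt/cache   # allocation per device
    btrfs balance status /mnt/cache     # a running balance would steal I/O
    btrfs device stats /mnt/cache       # per-device error counters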

     

    I followed a lot of threads on the forum/Reddit/Google etc. but they are all dead ends. I noticed @jonp and @johnnie.black seem to have a lot of experience with disk speed/caching/shares. Maybe you have some useful insight for me? Or some ideas to dig further?

     

    I've got a big project coming soon which requires the full speed of my cache pool, and I'm starting to worry. Any help would be really appreciated.

     

    Thank you again!

     

     

    EDIT: Here is the last lead I followed, but there are no further replies in that thread: slow-nvme-cache-pool

  3. Hi,

    I finally took the time to create a new thread about this issue. I apologize for the threads I resurrected in the past 😇

    As mentioned in the title, I'm getting poor disk speeds on shares mounted inside my Linux VM (Ubuntu 18.04.3). I ran a few tests I'd like to share with you. Hopefully someone will be able to help me.

     

    My VM is on the cache disk and my user share is on my data disk with cache mode set to 'YES'.

     

    Here are some stats gathered from unRaid over SSH:

     

    Cache Drive:

    Quote

    root@unRaid:/mnt/cache# sudo dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 0.437267 s, 1.9 GB/s

    Data Disk (via /mnt/user):

    Quote

    root@unRaid:/mnt/user# sudo dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 4.54764 s, 184 MB/s

     

     

    Here are the stats from inside the VM (which lives on the cache disk):

     

    Quote

    user@VM-linux:~$ dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 0.519352 s, 1.6 GB/s

    The speed decreased by roughly 19%, which I believe is normal overhead for a VM.
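
    One caveat I should flag (my assumption, not something I've verified): with only 800 MiB written and no oflag=direct or conv=fdatasync, dd may largely be measuring the page cache rather than the disks. A variant that forces the data to disk before dd reports its rate:

    dd if=/dev/zero of=./speedtest bs=8k count=100k conv=fdatasync; rm -f ./speedtest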

     

    Now let's mount the user share (cache mode: 'YES') inside the VM.

     

    Using NFS: (/etc/fstab: 192.168.0.54:/mnt/user/work /mnt/work nfs auto 0 0)

    Quote

    user@VM-linux:/mnt/work$ dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 1.51581 s, 553 MB/s
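
    Something I still want to try here (untested on my side; I'm assuming the defaults are conservative) is raising the NFS transfer sizes in the mount options:

    192.168.0.54:/mnt/user/work  /mnt/work  nfs  rsize=1048576,wsize=1048576,noatime,auto  0 0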

     

    Using CIFS: (/etc/fstab: //192.168.0.54/work /mnt/work cifs auto,guest,uid=user,gid=user,vers=3.0,mfsymlinks 0 0)

    Quote

    user@VM-linux:/mnt/work$ dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 1.35366 s, 620 MB/s
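
    Similarly, I may try a newer SMB dialect, looser caching and larger read/write sizes, assuming the Samba side supports them (untested):

    //192.168.0.54/work  /mnt/work  cifs  auto,guest,uid=user,gid=user,vers=3.1.1,cache=loose,rsize=4194304,wsize=4194304,mfsymlinks  0 0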

     

    Using 9p: (/etc/fstab: work /mnt/work 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0 -- path in VM XML file: /mnt/user/work)

    Quote

    user@VM-linux:/mnt/work$ dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 41.5812 s, 20.2 MB/s
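
    If I understand correctly, the default 9p msize is very small, which could explain part of this; raising it and enabling client-side caching might help (again untested, just a guess):

    work  /mnt/work  9p  trans=virtio,version=9p2000.L,msize=262144,cache=loose,_netdev,rw  0 0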

     

    Using sshfs: (/etc/fstab: sshfs#user@192.168.0.54:/mnt/user/work /mnt/work fuse IdentityFile=/home/user/.ssh/id_rsa,uid=user,gid=user,users,idmap=user,noatime,allow_other,_netdev,reconnect,exec,rw 0 0)

    Quote

    user@VM-linux:/mnt/work$ dd if=/dev/zero of=./speedtest bs=8k count=100k; rm -f ./speedtest
    102400+0 records in
    102400+0 records out
    838860800 bytes (839 MB, 800 MiB) copied, 24.5344 s, 34.2 MB/s

     

    As you can see, those numbers are quite poor compared to the ones obtained directly on unRaid. My knowledge of VM performance is quite limited, but here are the assumptions I had when I first ran these tests (even before setting the user share's cache mode to 'YES'):

    - I thought 9p would be the closest to 'bare metal' performance, but it's almost the opposite. Maybe something is misconfigured? I tried the default settings and other options too.

    - I thought sshfs would be close to 'bare metal', knowing I use a virtual bridge (br0), or that it would at least reach what NFS and CIFS reach.

     

    After all those tests I don't really know what to do to improve the situation. Even if the NFS/CIFS speed is not bad, losing roughly two thirds of the performance seems quite significant to me. And before I set the user share's cache mode to 'YES', the CIFS and NFS speeds peaked at around 20 MB/s.
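
    One more thing I want to rule out (again an assumption on my part): with bs=8k this is largely a latency test, since every 8 KiB write can mean a network round trip; a larger block size should show whether raw throughput is also limited:

    dd if=/dev/zero of=./speedtest bs=1M count=800 conv=fdatasync; rm -f ./speedtest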

     

    The current VM uses 30 of the 32 cores and 20 GB of the 32 GB of RAM. I've attached the VM's XML settings file if you'd like to take a deeper look, along with the diagnostics file.

     

    Thank you !

     

    EDIT: I can confirm the same results under a Windows VM using CIFS.

     

    unraid-diagnostics-20191208-1319.zip vm-settings.txt

  4. Hi,

     

    I've got exactly the same issue with my Linux VM (Ubuntu 18.04). Is there any news regarding this? Can a moderator confirm whether Linux has the same issue as OSX?

    @jonp any hint?

     

    Adding some outputs:

     

    Quote

     lshw -C network
      *-network                 
           description: Ethernet interface
           product: VMXNET3 Ethernet Controller
           vendor: VMware
           physical id: 0
           bus info: pci@0000:05:00.0
           logical name: enp5s0
           version: 01
           serial: 52:54:00:7b:ec:75
           size: 1Gbit/s
           capacity: 10Gbit/s
           width: 32 bits
           clock: 33MHz
           capabilities: bus_master cap_list rom ethernet physical tp 1000bt-fd 10000bt-fd
           configuration: autonegotiation=off broadcast=yes driver=vmxnet3 driverversion=1.4.13.0-k-NAPI duplex=full ip=192.168.0.63 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
           resources: irq:21 memory:93003000-93003fff memory:93002000-93002fff memory:93000000-93001fff memory:93040000-9307ffff

     

    From the VM (Ubuntu 18.04.3 - Linux 4.16.0-041600-generic):

    Quote

     

    dd if=/dev/zero of=./output bs=8k count=20k; rm -f ./output
    20480+0 records in
    20480+0 records out
    167772160 bytes (168 MB, 160 MiB) copied, 7.768 s, 21.6 MB/s

     

     

     

     

    From host (Linux unRaid 4.19.56-Unraid):

     

    Quote

    dd if=/dev/zero of=./output bs=8k count=20k; rm -f ./output
    20480+0 records in
    20480+0 records out
    167772160 bytes (168 MB, 160 MiB) copied, 1.22464 s, 137 MB/s
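
    To rule out the virtual network itself (lshw reports the link at 1 Gbit/s, i.e. a ceiling of roughly 125 MB/s), I plan to measure raw TCP throughput between the VM and the host, assuming iperf3 can be installed on both ends:

    # on the unRaid host
    iperf3 -s
    # in the VM
    iperf3 -c 192.168.0.54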

     

    fstab:

     

    Quote

    sshfs#user@192.168.0.54:/mnt/user/work       /mnt/work  fuse IdentityFile=/home/username/.ssh/id_rsa,uid=userid,gid=usergroup,users,idmap=user,noatime,allow_other,_netdev,reconnect,exec,rw 0 0
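
    One thing I still want to try (untested; as far as I know sshfs passes unrecognised options through to ssh) is a cheaper cipher and no compression, since encryption overhead is a common sshfs bottleneck:

    sshfs#user@192.168.0.54:/mnt/user/work  /mnt/work  fuse  IdentityFile=/home/username/.ssh/id_rsa,Ciphers=aes128-ctr,Compression=no,uid=userid,gid=usergroup,noatime,allow_other,_netdev,reconnect,rw  0 0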

     

    Thank you,

     

    Geoffrey
