gcoppin

  1. Hi, Thank you for sharing your advice and experience. I agree with you: 9p is a 'no go'. I will continue to use NFS, as it seems to give the best results, but it's still not enough for my production needs. What do you think about my last post? Does it seem normal to you that a share mounted from the 'Unassigned Devices' plugin (a single NVMe drive) gives me correct speed, while the share mounted from the cache pool (3 NVMe drives in RAID 0) performs much, much worse? I'm a bit puzzled... why do I have this difference? At the moment I'm thinking of creating a RAID 0 with the 'Unassigned Devices' plugin as described here and comparing the results (see the benchmark sketch below). It would be really unfortunate to see a loss of performance when using NVMe RAID...
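     (A minimal benchmark sketch for that comparison, assuming the cache pool is mounted at /mnt/cache and the Unassigned Devices disk at /mnt/disks/nvme; both paths are assumptions, adjust to your setup:)

        # Sequential write straight to each mount, bypassing the page cache
        # so the numbers reflect the disks rather than RAM (paths are assumed):
        dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=4096 oflag=direct
        dd if=/dev/zero of=/mnt/disks/nvme/testfile bs=1M count=4096 oflag=direct
        # Sequential read back, dropping caches first so reads hit the disks:
        sync; echo 3 > /proc/sys/vm/drop_caches
        dd if=/mnt/cache/testfile of=/dev/null bs=1M iflag=direct
        dd if=/mnt/disks/nvme/testfile of=/dev/null bs=1M iflag=direct
        rm /mnt/cache/testfile /mnt/disks/nvme/testfile

     (Running the identical commands against both mounts rules out differences in test methodology.)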
  2. Hi, I've entirely re-created my cache pool (pre-clear/format/etc.) but it didn't change much. So I decided to run a simpler test and mounted a single NVMe disk (BTRFS) using the Unassigned Devices plugin and NFS. Here is the result from unRaid: [screenshot] Here is the result from within the VM: [screenshot] As we can see, it's slightly faster in the VM than directly inside unRaid, and the graph in the unRaid stats matches the console output. It's all pretty good!!! The question now is: how come the single NVMe disk mounted with the Unassigned Devices plugin gives the expected results while the cache pool does not? I understand the VM is running on this cache pool, but it has almost no activity (the stats graph shows small spikes every now and then, around 2.5MB/s), so that shouldn't be the problem, should it? One thing I haven't isolated yet is the user-share layer itself (see the test sketch below). I've followed a lot of threads on the forum/Reddit/Google etc., but they are all dead ends. I noticed @jonp @johnnie.black seem to have a lot of experience with disk speed/caching/shares; maybe you could have some useful insight for me, or some ideas to dig further? I've got a big project coming soon which requires the full speed of my cache pool, and I'm starting to worry. Any help would be really appreciated. Thank you again! EDIT: Here is the last lead I followed, but there is no further answer from the customer: slow-nvme-cache-pool
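     (One variable worth isolating: /mnt/user/... paths go through Unraid's shfs FUSE layer, while an Unassigned Devices mount does not. A hedged test, assuming disk shares are enabled so the pool path is exported over NFS, and reusing the IP and share name from the posts below:)

        # In the VM's /etc/fstab – mount the pool path directly, bypassing
        # /mnt/user, then benchmark this mount and the user-share mount
        # the same way (IP, pool name, and share name are assumptions):
        192.168.0.54:/mnt/cache/work  /mnt/work-direct  nfs  auto 0 0

     (If the direct pool mount is fast and the /mnt/user mount is slow, the FUSE layer rather than the RAID 0 pool would be the suspect.)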
  3. Hi, I'm surprised nobody has replied yet. Does it mean everybody has the same (maybe expected?) issue, or am I the only one? I'd like to hear about your experience, to confirm the current behavior or not. Thank you
  4. Hi, I finally took the time to create a new thread about this issue. I apologize for the threads I resurrected in the past 😇 As mentioned in the title, I'm getting poor disk speed on shares mounted inside my Linux VM (Ubuntu 18.04.3). I ran a few tests I'd like to share with you; hopefully someone will be able to help me. My VM is on the cache disk and my user share is on my data disk with cache mode 'YES'. Here are some stats gathered from unRaid over SSH: Cache drive: [screenshot] Disk drive: [screenshot] And here are the stats from the VM (living on the cache disk): [screenshot] The speed decreased by roughly 19%, which I believe is normal given it's a VM. Now let's mount the user share (cache mode 'YES'):
     - Using NFS (/etc/fstab : 192.168.0.54:/mnt/user/work /mnt/work nfs auto 0 0) : [screenshot]
     - Using CIFS (/etc/fstab : //192.168.0.54/work /mnt/work cifs auto,guest,uid=user,gid=user,vers=3.0,mfsymlinks 0 0) : [screenshot]
     - Using 9p (/etc/fstab : work /mnt/work 9p trans=virtio,version=9p2000.L,_netdev,rw 0 0 -- path in the VM XML file : /mnt/user/work) : [screenshot]
     - Using sshfs (/etc/fstab : fuse IdentityFile=/home/user/.ssh/id_rsa,uid=user,gid=user,users,idmap=user,noatime,allow_other,_netdev,reconnect,exec,rw 0 0) : [screenshot]
     As you can see, those numbers are quite poor compared to the ones from unRaid directly. My knowledge of VM performance is quite limited, but here are the assumptions I had when I first ran these tests (even before setting the user share's cache mode to 'YES'):
     - I thought 9p would be the closest to bare-metal performance, but it's almost the opposite. Maybe something is misconfigured? I tried the default settings and the other options too (the 9p mount pairs with a <filesystem> stanza in the VM XML; see the sketch below).
     - I thought sshfs would be similar to bare metal, knowing I use a virtual bridge (br0), or would at least match what NFS and CIFS reach.
     After all these tests I don't really know what to do to improve the situation. Even if the NFS/CIFS speed is not bad, losing roughly two thirds of the performance seems quite significant to me. And before I set the user share's cache mode to 'YES', CIFS and NFS topped out around 20MB/s. The VM is using 30 of 32 cores and 20GB of its 32GB of RAM. I've attached the XML settings file if you'd like a deeper look, along with the diagnostics file. Thank you! EDIT : I can confirm the same results under a Windows VM using CIFS. unraid-diagnostics-20191208-1319.zip vm-settings.txt
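     (For reference, the 9p fstab line above relies on a filesystem passthrough stanza in the VM's libvirt XML. A minimal sketch of what that stanza typically looks like; the source dir and the 'work' target tag come from the fstab line above, while accessmode='passthrough' is an assumption about this particular config:)

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/user/work'/>
          <target dir='work'/>
        </filesystem>

     (The target dir is what shows up as the device name, 'work', in the first field of the 9p fstab entry.)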
  5. Hi, I've got exactly the same issue with my Linux VM (Ubuntu 18.04). Is there any news regarding this? Can a moderator confirm whether or not Linux has the same issue as OSX? @jonp any hint? Adding some outputs: From the VM (Ubuntu 18.04.3 - Linux 4.16.0-041600-generic): [screenshot] From the host (Linux unRaid 4.19.56-Unraid): [screenshot] fstab: [screenshot] Thank you, Geoffrey
  6. Hi, is anybody able to shed some light on this, please? Thank you, Geoffrey
  7. Hi Snidera, I have the same problem. While the VM itself works fine, any share mounted from the array (whether NFS, sshfs, or 9p) is capped at around 35MB/s, while a disk speed test on that share run directly on the server is totally fine (166MB/s); see the command sketch below. Did you make any progress with this issue? Any clue about what's happening? Thank you, Geoffrey.
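     (For reproducibility, the kind of test behind those numbers; the device name, mount point, and file size are assumptions:)

        # On the Unraid server – raw sequential read of the array disk:
        hdparm -t /dev/sdX
        # In the VM – write through the mounted share, flushed to disk
        # before dd reports, so the figure isn't inflated by caching:
        dd if=/dev/zero of=/mnt/work/testfile bs=1M count=1024 conv=fdatasync
        rm /mnt/work/testfile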