Xcage Posted October 2, 2023 (edited)

Hey all, I have five servers at different locations for different purposes. All of them, including this one, run fine and do what they are supposed to do, but there is one big issue with this new server I built. It has 5 NVMe drives plus 1 SSD as unassigned; diagnostics are attached.

The issue is slow write speeds, specifically during VM backup. I am using the backup plugin, which essentially just runs a cp command, but it makes no difference if I run cp manually (which I did before grabbing the diagnostics, so the file was captured while the slow transfer was in progress).

On my other servers with NVMe drives, where the VM image lives on the NVMe cache (btrfs) and the copy destination is the array (2-3 NVMe drives), speeds never drop below roughly 200 MB/s when copying to the parity-protected array, or around 1 GB/s when the destination is an NVMe drive with no parity. Here the speeds are 10-15 MB/s, and I have no idea why; everything looks fine.

It is worth mentioning that the vm.img is raw and was installed from scratch, no wonky copying from other hypervisors. The only difference is the VirtIO drivers and the QEMU guest agent (a newer version here), but it is tricky to reinstall those now since the VM is in production and I can't really stop it; I might have a chance to do that next month.

Also, if I do:

cp /mnt/cacheshare/vmfolder/vm.img justabackup.img

it is fast, as fast as NVMe writing should be. But with any other disk as the destination it crawls. Please enlighten me.

Edit: I do have this in my syslinux config because of the Samsung 980 firmware issue that reports wrong temperatures, until I get a chance to update the firmware on those disks:

append initrd=/bzroot nvme_core.default_ps_max_latency_us=0

But then again, I have another server with that same line because there are Samsung 980s there too, and performance there is fine.

newtechsrv-diagnostics-20231002-0704.zip

Edited March 20 by Xcage
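A minimal way to check whether the destination disk itself is slow, or only the cp path, is to compare raw sequential writes to the array disk and to the cache. This is just a sketch; /mnt/disk1 and /mnt/cache are placeholder paths, substitute the mounts from your own setup:

# Raw sequential write to an array disk, bypassing the page cache
dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct status=progress

# Same test against the cache pool for comparison
dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=4096 oflag=direct status=progress

# Remove the test files afterwards
rm -f /mnt/disk1/ddtest.bin /mnt/cache/ddtest.bin

If dd to the array disk is similarly slow, the bottleneck is the array write path (parity or filesystem) rather than cp itself.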
JorgeB Posted March 21

First thing I would recommend is to set up the NVMe devices assigned to the array as a ZFS raidz pool instead; it's much better for performance, and TRIM is supported.
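For illustration, a raidz1 layout along those lines would look roughly like this on the command line; in current Unraid releases the pool is created from the GUI instead, and the pool name and device names below are placeholders, not taken from the attached diagnostics:

# Sketch only: a single raidz1 vdev across four NVMe devices, 4K-aligned
zpool create -o ashift=12 nvmepool raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Enable automatic TRIM, or run it manually from time to time
zpool set autotrim=on nvmepool
zpool trim nvmepool

The design difference: raidz still gives one drive's worth of redundancy, but writes are striped across the vdev instead of going through the Unraid array's parity update path, which is usually where the large sequential-write penalty comes from.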