peace-keeping-villa8590 Posted August 4

I am currently trying to copy approximately 16TB of data to my TrueNAS VM and it is terribly slow. I have the TrueNAS share mounted via SMB and I'm trying to copy all of my data from my "zfs_storage" pool to a different pool within TrueNAS so I can recreate the "zfs_storage" pool in TrueNAS. Using the DiskSpeed docker, I benchmarked one of the nine 12Gb/s SAS SSDs (pic attached) that make up my "zfs" pool, but it only reached ~45MB/s, which is terrible in my opinion. I ran the same test on one of my "zfs_storage" pool drives (Seagate Exos 18TB) and got a peak of ~250MB/s and a little over 100MB/s towards the end of the test (pic attached). All of my SAS drives are on my Broadcom LSI SAS9400-16i and all SATA drives are on my LSI SAS9300-16i. My transfer speeds are frequently below 15MB/s and peak around 60MB/s. The data I'm moving lives on the "zfs_storage" 18TB spinning drives that tested at over 100MB/s, so I would expect the transfer to run faster than it does. I also tried mounting TrueNAS as an NFS share, but the speed was the same.

tower-diagnostics-20240804-1335.zip
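For reference, this is the kind of raw read test I can run from the Unraid console to take SMB, ZFS and the network out of the picture (assuming the slow SSD shows up as /dev/sdb; the device names here are just examples, lsblk confirms which drive is which):

lsblk -o NAME,MODEL,SIZE,TRAN
# direct sequential read, bypassing the page cache
dd if=/dev/sdb of=/dev/null bs=1M count=8192 iflag=direct status=progress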
JorgeB Posted August 4

Also do the controller test with DiskSpeed and post the results, the one that looks like this:
peace-keeping-villa8590 Posted August 4

19 minutes ago, JorgeB said:
Also do the controller test with DiskSpeed and post the results, the one that looks like this:

I wasn't aware there was a test for the HBA, but here are the results.
peace-keeping-villa8590 Posted August 5

After looking at the results above, I decided to run sdparm --page=ca <drive> on a couple of drives and compare the results. I'm not sure what the "def" column is, but I noticed that the values differ between drives. The drive "zfs", which is slow, is device label "sdb", and "zfs7" is device label "sdc".
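In case it helps anyone reproduce this, the individual caching bits can also be read field by field. Assuming the slow drive is still /dev/sdb, something like this prints the write-cache-enable (WCE) and read-cache-disable (RCD) bits along with their changeable (cha), default (def), and saved (sav) values:

sdparm --get=WCE /dev/sdb
sdparm --get=RCD /dev/sdb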
JorgeB Posted August 5

Well, it doesn't look like a controller problem. If it were just writes, it could be write cache being disabled, but it's showing slow reads as well, so I'm not sure.
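If the write cache did turn out to be off on those drives, sdparm can usually switch it on per drive (assuming /dev/sdX is the drive in question; --save keeps the setting across power cycles):

sdparm --set=WCE --save /dev/sdX

But with reads affected as well, that alone wouldn't explain it.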
peace-keeping-villa8590 Posted August 6

18 hours ago, JorgeB said:
Well, it doesn't look like a controller problem. If it were just writes, it could be write cache being disabled, but it's showing slow reads as well, so I'm not sure.

Could it be the drives themselves? Nothing I can see in the SMART data looks like a red flag, but my knowledge is limited.
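For what it's worth, this is how I'm pulling the SMART data on the SAS drives (assuming /dev/sdb is one of the slow ones); on SAS drives the grown defect list, the non-medium error count, and the read/write error counter log are the parts I'm checking:

smartctl -x /dev/sdb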
peace-keeping-villa8590 Posted August 6

3 hours ago, JorgeB said:
It's possible, though strange.

I'm trying to upgrade the firmware on my LSI cards and go from there.
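Before flashing, I want to confirm what firmware each HBA is currently on. Assuming the usual Broadcom tools are available (the 9300-16i answers to sas3flash, while the 9400-series cards are normally managed with storcli), something like:

sas3flash -listall
storcli64 /c0 show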
peace-keeping-villa8590 Posted August 10

Is there a way to see if the drives themselves are causing the issue?
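What I have in mind is reading each suspect drive directly, so that SMB, ZFS, and the network are all out of the equation. Assuming fio is installed and the drive is /dev/sdb (just an example device), a read-only sequential test along these lines is what I'd expect to run:

fio --name=seqread --filename=/dev/sdb --readonly --rw=read --bs=1M \
    --direct=1 --ioengine=libaio --iodepth=16 --runtime=30 --time_based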
peace-keeping-villa8590 Posted August 10

So today I looked at the benchmarks for all the drives and recreated my zfs pool with only the drives that aren't giving issues. Afterwards I pulled all the drives that were slow and realized all of them are the same model, "MZILS3T8HCJM". Seven of the nine SSDs that made up my zfs pool were this model of drive. The other two drives are a different model and don't have any performance issues.
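For anyone checking their own drives, this is the sort of one-liner that matches device names to vendor, model, and firmware revision (the REV column is what I'm keeping an eye on):

lsblk -d -o NAME,VENDOR,MODEL,REV,SIZE,TRAN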
JorgeB Posted August 10

Could be some issue with those specific drives.
peace-keeping-villa8590 Posted August 10

The two drives that currently make up my zfs pool and are not slow turn out to be the same model (MZILS3T8HCJM), but they show up as NETAPP-branded. This leads me to believe that the issue is a firmware problem on the drives I'm having trouble with. Thoughts? Does anyone know if it's possible to flash the drives to a different firmware?
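To compare the two sets side by side, the standard INQUIRY data shows the vendor, product, and firmware revision strings. Assuming /dev/sdX is one of the slow generic-branded drives and /dev/sdY is one of the NETAPP-branded ones (placeholders, not my actual device names):

sg_inq /dev/sdX
sg_inq /dev/sdY

smartctl -i prints the same vendor/product/revision lines if sg_inq isn't available.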
peace-keeping-villa8590 Posted August 11

I attached two of the slow drives at random to a Windows VM and ran CrystalDiskMark on them; here are the results. Does this mean my issue is coming from Unraid itself? Windows can write to the same disks many times faster than Unraid: Unraid writes at 49MB/s while Windows writes at 6,600+MB/s?
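For a closer comparison from the Unraid side, a direct-I/O write to a scratch area on one of the pulled drives keeps any caching out of the numbers (assuming the drive is formatted and mounted at /mnt/scratch and its contents are expendable; the path is only an example):

dd if=/dev/zero of=/mnt/scratch/testfile bs=1M count=4096 oflag=direct status=progress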
peace-keeping-villa8590 Posted August 12

I passed the HBA through to the Windows VM and tested a few drives. The results are much closer to what I expected and to what I would expect Unraid to be able to do. It just seems that Unraid is missing something needed to use the drives natively at full speed. Can someone chime in on what I should do?
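In case it's relevant, this is how I can check which kernel driver each HBA is bound to on the Unraid side when it isn't passed through (the grep pattern is just an example; I'd expect both cards to show mpt3sas as the driver in use):

lspci -knn | grep -A 3 -i 'SAS'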
JorgeB Posted August 12

Sorry, I don't remember ever seeing a similar issue, especially with slow reads as well, not just writes.