PyCoder Posted November 3, 2021 (edited)

Hi guys! I was setting up a new Unraid server, but this time I switched from ZFS to the "normal" Unraid setup, and now my problem: the SMB transfer speed is really slow, averaging 15 MB/s. Is there any way to fix that? The drives should manage at least 80 MB/s. It's the same hardware and the same Unraid install; I only removed the ZFS pool and created an Unraid array. If I switch back to the ZFS pool, I get around 350 MB/s. I know the ZFS pool is faster thanks to its traditional striped RAID setup and that the Unraid array is only as fast as a single HDD, but 15 MB/s? Can someone help?

Edited November 4, 2021 by PyCoder: solved
Frank1940 Posted November 3, 2021 (edited)

Are you using a cache drive for the array? Writes directly to the array are much slower than reads. The second (and major) factor is the size of the files being transferred. Writing directly to the array is slow because of file-creation overhead and the extra reads and writes needed to keep parity updated in real time. There are also two write methods, selectable under Settings >>> Disk Settings >>> 'Tunable (md_write_method)'. The "reconstruct write" method is faster, but it spins up every disk in the array rather than just the parity disk(s) and one data disk.

It looks like you are doing the initial data load of a new server. If you have a good backup of all the data you are transferring, you could unassign the parity disk, leaving the array unprotected, but that should at least double the transfer speed. (When you reassign the parity disk after the data is loaded, a parity build will be required.)

One more observation: small-capacity drives have slower transfer speeds than large-capacity drives because of the higher data density on the larger platters.

Edited November 3, 2021 by Frank1940
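The two write methods Frank1940 describes have very different I/O costs, which is why direct array writes feel so slow. Here is a back-of-the-envelope sketch (my own simplified model, not Unraid's code) of the best-case throughput of each method, assuming all disks sustain the same raw speed and ignoring seek overhead, which in practice makes the default method even slower:

```python
# Rough model of Unraid's two parity write methods (simplified assumption:
# identical disks, no seek latency).

def write_speed(disk_mb_s, method):
    """Best-case sustained write speed to a parity-protected array."""
    if method == "read/modify/write":
        # Default: read old data + read old parity, then write new data
        # + new parity. The data and parity disks each do one read AND one
        # write per block, so throughput is at best half a single disk's
        # speed -- and seeking between the read and write halves it again
        # in practice.
        return disk_mb_s / 2
    if method == "reconstruct write":
        # "Turbo write": parity is recomputed from reads of all the OTHER
        # data disks, so the target disk and parity disk each do one pure
        # streaming write -- roughly full single-disk speed, but every
        # disk must spin up.
        return disk_mb_s
    raise ValueError(f"unknown method: {method}")

for method in ("read/modify/write", "reconstruct write"):
    print(f"{method}: ~{write_speed(100, method):.0f} MB/s on 100 MB/s disks")
```

This matches the numbers later in the thread: ~115 MB/s after enabling reconstruct write is about what one of these disks can stream, while the 15 MB/s starting point suggests seek-bound read/modify/write cycles on small files.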
PyCoder Posted November 3, 2021 Author

17 minutes ago, Frank1940 said: [quoted reply above]

Hi, no, I don't use any cache drive, and I tested md_write_method without success. I know the array is only as fast as a single HDD, but 15 MB/s? Even copying a file from my PC to my laptop I get at least 50 MB/s, so something must be off. And the 350 MB/s figure is with ZFS; yes, it's striped and therefore faster by design, but 15 MB/s still isn't normal.
Frank1940 Posted November 3, 2021 Share Posted November 3, 2021 I looked at your screenshot and the server had been up for 33 minutes. You were copying 504Gb of data. Start that transfer again, put the Windows Explorer transfer screen on top, walk away from the computer and just let it complete. Come back every fifteen minutes and just look at the progress. Touch nothing but the screen with your eyes. IF you need something to do while waiting, you can read this post on the Laptop (the computer not doing the transfer..): https://forums.unraid.net/topic/50397-turbo-write/ Quote Link to comment
PyCoder Posted November 4, 2021 Author (edited)

I played around with the scheduler and md_write_method. By the way, if all the disks have to spin up anyway, I could just as well stick with ZFS. 115 MB/s now; problem solved.

Edited November 4, 2021 by PyCoder
Andiroo2 Posted November 7, 2021

On 11/4/2021 at 5:25 AM, PyCoder said: [quoted reply above]

This is where a cache disk will be helpful. A large-capacity SSD, or ideally an NVMe drive, used as a write cache will make your writes as fast as your network and the sending disks can handle.