haloeight Posted July 30, 2022

New to Unraid, coming from OMV; I like it so far. I initially had trouble getting Unraid up and running with my NIC configuration. I have two onboard GbE NICs and a Mellanox 10GbE add-in card in the Unraid server, with an identical 10GbE card in my workstation connected via a direct-attach cable. I took the Mellanox out of the bonding setup, which initially broke Unraid's ability to connect to the internet, presumably because of routing rules. After several attempts I got it working with the two onboard GbE NICs (one unused) in the eth0 bond and eth2 as the 10GbE interface. Both are configured with static addresses, as DHCP didn't seem to work right.

When I transfer over the 10GbE connection I see rates drop to 50 MB/s, which means backing up a 14 TB image is estimated at around 24 hours. This is slower than I was seeing on a similar setup with OMV. I read in another thread that bonding slower NICs with a faster one degrades performance, but since the GbE NICs are only bonded with each other I can't see that contributing here. I had to disable cache because the backup image files would exceed the cache size and then fail to transfer to the array, so everything now goes straight to the array. I understand the rate limiter here will be the spinning disks, not network speed, but I don't understand why transfers are so much slower than on an equivalent OMV setup. Thanks in advance.

nas-diagnostics-20220730-1536.zip
Kilrah Posted July 30, 2022

What was your setup on OMV? Depending on RAID level you'd have been getting the advantage of writes scattered across multiple disks, which is not a thing on Unraid. Here you're writing the actual data to a single disk, and handling parity can slow that down further. The goals are very different: write performance to the array isn't Unraid's strong suit or priority, but in exchange you get the ability to add and remove drives with great flexibility, plus the bonus that even if multiple disk failures exceeded the fault tolerance, only the actually dead drives would lose data.

In Disk Settings, set md_write_method to Reconstruct Writes; that will basically get you the speed of your slowest disk.
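For anyone wanting to script this rather than use the GUI, a rough sketch of the CLI equivalent is below. The mdcmd path and argument values are assumptions based on common Unraid usage, not taken from this thread — verify them against your Unraid version before relying on them.

```shell
# Hedged sketch: Unraid's md driver is usually controlled via mdcmd.
# 1 = reconstruct write ("turbo write"), 0 = read/modify/write (default).
# Paths and values are assumptions -- check your Unraid version's docs.
/usr/local/sbin/mdcmd set md_write_method 1

# Confirm the current value (mdcmd status dumps driver variables)
/usr/local/sbin/mdcmd status | grep -i write_method
```

Note this CLI change doesn't persist across reboots the way the Disk Settings page does, so the GUI route is generally preferable.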
haloeight Posted July 30, 2022 Author

11 minutes ago, Kilrah said: In Disk settings set md_write_method to Reconstruct Writes, that will basically get you the speed of your slowest disk.

Thank you for the response. I was not using RAID in OMV, so I think the better transfer speed there came down purely to network settings. I will try that setting and see how it fares. The other thing I want to sort out is how to use the cache drives properly without overfilling them; I just changed the minimum free space setting as recommended in a number of threads here.
Kilrah Posted July 30, 2022

Do change the write method first; having it set to default will limit you to less than half the speed of your parity drive, so 50 MB/s isn't out of line for that being the limiting factor.
haloeight Posted July 30, 2022 Author

1 hour ago, Kilrah said: Do change the write method first, having it set to default will limit you to less than half the speed of your parity drive so 50MB/s isn't out of line for that being the limiting factor.

Thank you. I turned that on and transfer rates are improving over what I was getting before. They fluctuate a bit but are closer to 200 MB/s than 50 MB/s, stabilizing around 130 MB/s.
haloeight Posted July 31, 2022 Author

I just looked at the Windows side of the 10GbE connection: the NIC shows a link speed of 1410 Mbps, while the network properties for the 10GbE network show the full speed. In the Unraid dashboard the 10GbE NIC shows a 10000 Mbps link. Is there some setting to ensure things are operating at full speed?
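One way to cross-check both ends is to ask the driver directly what speed it negotiated, rather than trusting the dashboard. A sketch, assuming the interface name eth2 mentioned earlier in the thread (adjust to match your system):

```shell
# On the Unraid side: query the negotiated link speed of the 10GbE interface.
# "eth2" is the interface name used earlier in this thread; change if yours differs.
ethtool eth2 | grep -E 'Speed|Duplex|Link detected'

# On the Windows side, the rough PowerShell equivalent is:
#   Get-NetAdapter | Select-Object Name, LinkSpeed
# which reports the per-adapter negotiated link speed.
```

If the two ends disagree on negotiated speed, that usually points at the cable/DAC or a driver/firmware mismatch rather than an Unraid setting.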
Kilrah Posted July 31, 2022

Are these connected directly? Could be a cable issue... though I didn't know it was even possible to have a link speed at some odd number other than 1/2.5/5/10 Gbps.
JorgeB Posted July 31, 2022

Start by running a single-stream iperf test to check network throughput.
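For reference, a minimal single-stream test with iperf3 looks like the sketch below. It assumes iperf3 is installed on both machines (on Unraid it's commonly added via a plugin such as Nerd Tools); substitute your server's 10GbE address.

```shell
# On the Unraid server: listen for test connections
iperf3 -s

# On the workstation: run a single-stream test against the server's 10GbE IP
# (-P 1 is a single parallel stream, -t 10 runs for 10 seconds)
iperf3 -c 192.168.1.2 -P 1 -t 10
```

A healthy 10GbE direct-attach link typically reports somewhere around 9.4 Gbits/sec; a result near 1.4 Gbits/sec would instead match the odd link speed Windows reported above.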
haloeight Posted July 31, 2022 Author

10 hours ago, JorgeB said: Start by running a single stream iperf test to check network throughput.

Thanks. Here are the results: the first NIC, 192.168.1.2, is the 10GbE; the second is a bonded 802.3ad 2x GbE connection.

14 hours ago, Kilrah said: Are these connected directly? Could be a cable issue...

Yes, the 10GbE connection consists of two identical Mellanox ConnectX-3 NICs connected with a 10G SFP+ direct-attach cable.

I suppose the 130 MB/s I'm seeing for the backups is the limit of the spinning disks in the Unraid array.
JorgeB Posted August 1, 2022

11 hours ago, haloeight said: I suppose that the 130MB/s number I'm seeing for the backups is the limit of the spinning disks in the UNRAID array.

Most likely; network bandwidth looks good. You can get better speeds by transferring to a cache pool with one or more fast flash devices.
haloeight Posted August 2, 2022 Author

15 hours ago, JorgeB said: Most likely, network bandwidth looks good, you can get better speeds transferring to a cache pool with one or more fast flash devices.

Thank you. The problem I'm finding with the cache pool is that while it produces better speeds (~250 MB/s), I can't seem to find settings that handle the backup images exceeding the available space on the cache. When the image backup starts, Macrium Reflect sees the total space available in the array (~16 TB) and proceeds because it thinks there is enough space, but the drive images end up larger than the available cache storage. This happens even with minimum free space set on the cache pool and share to stop short of maxing out the cache drives. For now I'm avoiding the cache and backing up individual folders instead of whole drives to get around the issue.
Kilrah Posted August 2, 2022

For images you could set Macrium Reflect to split the backups to a size smaller than your "min free space" setting in the advanced options. The decision of where to put each file is made at file creation, so when Reflect starts the next split file the placement decision happens again and the file is redirected to the array if needed.
ChatNoir Posted August 2, 2022

Also adjust the pool's minimum free space so that it's larger than the largest file you expect to write to the pool. Some say twice that size, to leave some headroom, since file systems often don't react well to being filled completely.
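As a quick illustration of that rule of thumb (the 100 GiB split size and 2x factor below are example numbers, not Unraid requirements):

```shell
# Illustrative arithmetic only: if Macrium's split files are capped at
# 100 GiB, the "twice the largest file" guideline above suggests setting
# the pool's minimum free space to 200 GiB.
largest_file_gib=100
safety_factor=2
echo "$(( largest_file_gib * safety_factor )) GiB minimum free space"
```

Because the minimum-free-space check happens when a file is created, it only works if each individual file (here, each split) is smaller than that threshold.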
haloeight Posted August 2, 2022 Author

6 hours ago, Kilrah said: For images you could set Macrium Reflect to split the backups to a size smaller than your "min free space" setting in the advanced options.

Thanks. I was using this approach, but the problem is that incremental backups then can't be done, so a full backup is needed every time, which is tedious for multi-terabyte images. I'm getting around this by just doing a simple file & folder backup, which is smaller and easier to manage.

6 hours ago, ChatNoir said: And also adjust the pool's minimum free space so that it's larger than the maximum file you expect to write to the pool.

Thanks. I did this, but the whole-disk image files are much larger than the cache pool itself. Macrium Reflect looks at the total storage in the array, starts the job, then chokes at the end after 20+ hours when it hits the cache pool limit. The ideal would be to have cache turned on with max file size limits set, and have Macrium account for the cache limitation by backing up straight to the array, rather than needing to turn cache off; but this doesn't seem to work.
Kilrah Posted August 2, 2022

There isn't really going to be a solution other than setting cache:no on the specific share you use for those backups if they're always going to be too big for the cache.