Migrate media library to Unraid - SMB/NFS problems


sebstrgg


Hi,

 

I've just purchased a new server with the goal of minimizing my homelab footprint and changing up (down? 😉) to Unraid from ZFS. Before I can shut down the old machines I need to migrate all my data libraries.

 

The thing is that I've run into a few problems as I'm currently migrating my 13TB library to the new machine.

 

--

 

1. While transferring my data from my ZFS share to my Unraid share via SMB I max out at approximately 45-55MB/s on large files. I've scoured the web and this forum for different solutions, but can't get better speeds. The SMB protocol might be slow, but that's nowhere near the limits of a modern version of SMB.

 

I've also tried changing the max protocol setting and transferring over a disk share instead of a user share, all to no avail.
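For context, a max-protocol override in Samba looks roughly like this when added under Settings > SMB > Samba extra configuration (the values are only an example of the kind of change tried, not a known fix):

    [global]
       # limit Samba to modern SMB dialects (illustrative values)
       server min protocol = SMB2
       server max protocol = SMB3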

 

2. I haven't been able to reliably mount my share via NFS. NFS has been enabled, the share has been exported as Public, and after verifying it via exportfs -v I have been able to mount it a few times, but every time it succeeded it timed out shortly after. Most of the time it times out while trying to mount the share. I've tried this on Ubuntu 16.04/18.04 and Debian 9 machines that all currently have other working NFS shares mounted.
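For reference, a typical verify-and-mount attempt looks roughly like this (hostname, export path and mount point are placeholders; as far as I know Unraid 6.7 only serves NFSv3):

    # on the Unraid server: confirm the share is actually exported
    exportfs -v
    # on the Ubuntu/Debian client: list the exports, then mount one
    showmount -e tower.local
    sudo mount -t nfs -o vers=3,proto=tcp tower.local:/mnt/user/media /mnt/media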

 

--

 

The disks in my new server are verified for ~195MB/s read/write speeds.

 

There's nothing wrong with the physical layer, there are no firewalls between my VLANs limiting specific types of traffic, and there's no port configuration on the switch that might limit throughput.

 

EDIT: I realized that I forgot to mention that I have downloaded and enabled the Turbo Write plugin from CA as well. There is NO difference in transfer speed with it enabled, nor any difference in the perceived read/write speeds of the array. (Yes, I have verified that it's active.)

 

The same speed limitations occurred when I tried with the cache drive ON. While migrating I have not assigned a parity drive or a cache drive, as my data is already safe on my old setup. I just tried switching the array setup around to see if it would help with speeds, but without any luck.

 

Unraid version: latest (6.7.2)

Windows 10 version: latest (x64-1903)

 

Unraid server specs:

1x Xeon Gold 5218

128GB DDR4 2400MHz ECC REG

ASUS Z11PA-D8

 

Array for migrating (w/o old disks added):

5x WD 8TB White (WD80EZAZ)

- 4x for storage, 1x for parity

1x Intel 660p 1TB M.2 NVMe

- for cache

 

Network consisting of:

- Cisco 3750G

- pfSense

 

I'll be happy to provide any additional info needed to troubleshoot the problem.

 

Thanks

/S

Edited by sebstrgg
Grammar/clarifying
Link to comment
2 hours ago, sebstrgg said:

1. While transferring my data from my ZFS share to my Unraid share via SMB I max out at approximately 45-55MB/s on large files. I've scoured the web and this forum for different solutions, but can't get better speeds. The SMB protocol might be slow, but that's nowhere near the limits of a modern version of SMB.

 

With a parity-protected array, those speeds are "normal" when transferring a lot of data.  You can improve performance by enabling Turbo Write, which is especially useful during large data transfers.  Whether or not you leave it on after the data transfer is up to you; its only downside is that all the data drives remain spun up.  There is also a turbo write plugin which seeks to automatically enable/disable turbo write depending on how many drives are currently spun up.
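For reference, turbo write lives under Settings > Disk Settings > Tunable (md_write_method); it can reportedly also be toggled from a terminal with something like the following (syntax from memory, so treat it as a sketch):

    # 1 = reconstruct write ("turbo write"), 0 = normal read/modify/write
    mdcmd set md_write_method 1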

 

You're never going to see 195 MB/s, but some have reported 80 MB/s to 100 MB/s with turbo write enabled.

 

EDIT: I should also add that, depending on the amount of data to be transferred, some users opt not to enable parity at all during a data migration until after the transfer is complete.  Without parity drive(s) in the picture, transfers will be faster.

Edited by Hoopster
Link to comment
4 hours ago, Hoopster said:

 

With a parity-protected array, those speeds are "normal" when transferring a lot of data.  You can improve performance by enabling Turbo Write, which is especially useful during large data transfers.  Whether or not you leave it on after the data transfer is up to you; its only downside is that all the data drives remain spun up.  There is also a turbo write plugin which seeks to automatically enable/disable turbo write depending on how many drives are currently spun up.

 

You're never going to see 195 MB/s, but some have reported 80 MB/s to 100 MB/s with turbo write enabled.

 

EDIT: I should also add that, depending on the amount of data to be transferred, some users opt not to enable parity at all during a data migration until after the transfer is complete.  Without parity drive(s) in the picture, transfers will be faster.

Appreciate the feedback, but it seems like you missed an important part from my post:

 

"Same Speed limitations occured when I tried with cache drive ON. While migrating I've not assigned a parity drive nor cache as I already have my data safe on my old setup. I just tried to switch around the setup of array to see if it could help out with speeds, but without any luck."

 

I also realized that I forgot to mention that I have downloaded and enabled the Turbo Write plugin from CA as well. There is NO difference in transfer speed with it enabled, nor any difference in the perceived read/write speeds of the array.

 

My aforementioned 195MB/s speeds referred to the read/write speeds of the individual disks, not actual transfer speeds, as these need to be looked at separately. I have seen very similar numbers even in the array (since I currently don't have a parity or cache drive added).

 

I would expect transfer speeds of ~100MB/s due to bandwidth constraints, as I'm using a 1GbE connection between my Unraid server and switch.

Edited by sebstrgg
Link to comment
1 hour ago, sebstrgg said:

I would expect transfer speeds of ~100MB/s due to bandwidth constraints, as I'm using a 1GbE connection between my Unraid server and switch.

You never get this speed in Unraid when writing to a parity-protected array drive, as each logical 'write' operation in Unraid involves multiple steps:

  • Read the target sector from the target drive and the parity drive(s) (in parallel).
  • Calculate the new contents of the parity drive sector(s) from the change to the target drive sector.
  • Wait for the target drive and parity drive(s) to complete a disk revolution.
  • Write the new sector contents to the target drive and the parity drive(s) (in parallel).

In this mode, speed is limited by the fact that one always has to wait for a disk revolution as part of the logical 'write' operation. The big advantage is that no other drives need to be spun up for the operation to succeed.
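For single parity the underlying arithmetic is plain XOR, which is why only the target and parity drives are needed:

    new_parity = old_parity XOR old_data XOR new_data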

 

If 'Turbo Write' mode is enabled, the algorithm is closer to that used in a parity check:

  • Read the corresponding sector from all data drives EXCEPT the target drive (in parallel).
  • Calculate the contents of the target sector on the parity drive(s) using the data from those array drives and the new contents of the target drive.
  • Write the new data to the target drive and the calculated sector contents to the parity drive(s).

The idea is to improve speed by eliminating the need to always wait for a full disk revolution as part of the 'write' operation.  However, the speed in this mode will never exceed the speed that can be obtained in a parity check.  The big disadvantage of this mode is that it requires all array drives to be spun up for the operation to complete.
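Again for single parity, this mode simply recomputes the whole stripe's parity in one go:

    parity = data_1 XOR data_2 XOR ... XOR data_N   (with the target drive's new contents substituted in)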

Link to comment
16 minutes ago, itimpi said:

You never get this speed in Unraid when writing to a parity-protected array drive, as each logical 'write' operation in Unraid involves multiple steps:

  • Read the target sector from the target drive and the parity drive(s) (in parallel).
  • Calculate the new contents of the parity drive sector(s) from the change to the target drive sector.
  • Wait for the target drive and parity drive(s) to complete a disk revolution.
  • Write the new sector contents to the target drive and the parity drive(s) (in parallel).

In this mode, speed is limited by the fact that one always has to wait for a disk revolution as part of the logical 'write' operation. The big advantage is that no other drives need to be spun up for the operation to succeed.

 

If 'Turbo Write' mode is enabled, the algorithm is closer to that used in a parity check:

  • Read the corresponding sector from all data drives EXCEPT the target drive (in parallel).
  • Calculate the contents of the target sector on the parity drive(s) using the data from those array drives and the new contents of the target drive.
  • Write the new data to the target drive and the calculated sector contents to the parity drive(s).

The idea is to improve speed by eliminating the need to always wait for a full disk revolution as part of the 'write' operation.  However, the speed in this mode will never exceed the speed that can be obtained in a parity check.  The big disadvantage of this mode is that it requires all array drives to be spun up for the operation to complete.

 Thanks for the feedback.

 

I understand that a parity-protected array won't be able to reach those speeds, but as mentioned twice now (and edited into the main post) - I have NOT enabled a parity drive.

Link to comment
4 minutes ago, sebstrgg said:

 Thanks for the feedback.

 

I understand that a parity-protected array won't be able to reach those speeds, but as mentioned twice now (and edited into the main post) - I have NOT enabled a parity drive.

Fair enough - I assumed you had from your mention of enabling Turbo Write, which is not relevant without a parity drive.

 

It might be worth posting your system diagnostics zip file (obtained via Tools >> Diagnostics) to see if it might suggest a reason.

Link to comment
12 minutes ago, johnnie.black said:

Getting the same speed to the array and the cache suggests to me a LAN problem; run iperf with a single thread to see what speed you get.
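A single-stream test of this kind can be run roughly like this (iperf3 assumed; 10.10.5.195 is the listening side in the output below):

    # on the receiving side
    iperf3 -s
    # on the sending side, a single stream (-P 1)
    iperf3 -c 10.10.5.195 -P 1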

[  5] local 10.10.5.195 port 5201 connected to 10.10.5.46 port 48672
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   107 MBytes   899 Mbits/sec
[  5]   1.00-2.00   sec   112 MBytes   939 Mbits/sec
[  5]   2.00-3.00   sec   112 MBytes   939 Mbits/sec
[  5]   3.00-4.00   sec   112 MBytes   939 Mbits/sec
[  5]   4.00-5.00   sec   112 MBytes   939 Mbits/sec
[  5]   5.00-6.00   sec   112 MBytes   939 Mbits/sec
[  5]   6.00-7.00   sec   112 MBytes   939 Mbits/sec
[  5]   7.00-8.00   sec   112 MBytes   939 Mbits/sec
[  5]   8.00-9.00   sec   112 MBytes   939 Mbits/sec
[  5]   9.00-10.00  sec   112 MBytes   939 Mbits/sec
[  5]  10.00-11.00  sec   112 MBytes   939 Mbits/sec
[  5]  11.00-12.00  sec   112 MBytes   939 Mbits/sec
[  5]  12.00-13.00  sec   112 MBytes   939 Mbits/sec
[  5]  13.00-14.00  sec   112 MBytes   939 Mbits/sec
[  5]  14.00-15.00  sec   112 MBytes   939 Mbits/sec
[  5]  15.00-16.00  sec   112 MBytes   939 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-16.00  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-16.00  sec  1.77 GBytes   949 Mbits/sec                  receiver

It's not a LAN problem.
 

 

34 minutes ago, itimpi said:

Fair enough - I assumed you had from your mention of enabling Turbo Write, which is not relevant without a parity drive.

 

It might be worth posting your system diagnostics zip file (obtained via Tools >> Diagnostics) to see if it might suggest a reason.

 

No worries, I know I was a bit unclear - but I'm also new to Unraid, which makes things extra confusing. I've attached the diagnostics to this post.

unraid-diagnostics-20190929-0955.zip

Link to comment
12 hours ago, sebstrgg said:

Appreciate the feedback, but it seems like you missed an important part from my post:

 

"Same Speed limitations occured when I tried with cache drive ON. While migrating I've not assigned a parity drive nor cache as I already have my data safe on my old setup. I just tried to switch around the setup of array to see if it could help out with speeds, but without any luck."

Yep, missed that somehow.  For some reason, I assumed the array specs (with parity and cache) were how it was set up for the data transfer.  Cache is also commonly disabled/not used when transferring data between systems as, typically, there is not nearly as much storage on a cache drive as on even a single hard drive, so it would fill up quickly.  However, if your write speeds, even to an NVMe SSD, are that low, something outside of the unRAID array is causing the bottleneck.

 

I see others have jumped in with some ideas and they are MUCH more knowledgeable about array/disk/network performance issues.

Link to comment
9 hours ago, johnnie.black said:

It appears not. If you can, test with another source computer and/or do some write tests with dd on the array/cache.
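A raw write test along those lines could look something like this (paths are examples; oflag=direct bypasses the page cache so the figure reflects the disk rather than RAM):

    # write 4 GiB of zeros to an array disk and to the cache, then compare the reported rates
    dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=4096 oflag=direct
    rm /mnt/disk1/ddtest.bin /mnt/cache/ddtest.bin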

 

15 minutes ago, Hoopster said:

Yep, missed that somehow.  For some reason, I assumed the array specs (with parity and cache) were how it was set up for the data transfer.  Cache is also commonly disabled/not used when transferring data between systems as, typically, there is not nearly as much storage on a cache drive as on even a single hard drive, so it would fill up quickly.  However, if your write speeds, even to an NVMe SSD, are that low, something outside of the unRAID array is causing the bottleneck.

 

I see others have jumped in with some ideas and they are MUCH more knowledgeable about array/disk/network performance issues.

 

Well, it seems like I had a brain fart of some sort in regard to transfer speeds and how I was trying to do this previously. I was doing the SMB transfers via my desktop, which limited the throughput vastly, since my computer had to send/receive all data as the middleman between my servers.

I mounted my current NFS shares directly on the Unraid machine with the help of the Unassigned Devices plugin and restarted my transfers with rsync, and suddenly everything works as expected, with two simultaneous streams averaging (in total) around 100MB/s, which is what I'd expect over a single 1GbE link.
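Roughly, the approach boils down to the following (Unassigned Devices handles the mount from the GUI; the hostname and paths here are placeholders):

    # mount the old server's NFS export on the Unraid box (UD does this for you)
    mkdir -p /mnt/disks/oldnas
    mount -t nfs oldserver.local:/tank/media /mnt/disks/oldnas
    # copy straight into the array share, preserving attributes and showing progress
    rsync -avh --progress /mnt/disks/oldnas/ /mnt/user/media/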

 

I was never worried about the functionality of Unraid or whether it would suit my needs, but it's really confusing coming from a full-blown VMware environment to a much more simplified workflow. Transfer jobs of this size aren't normally done on my network either. 🙂

 

Again, I appreciate all your help - sometimes you just have to explain the problem to someone else and get a few questions thrown back at you to finally get your head thinking straight again.

 

 

 

 

 

Edited by sebstrgg
Link to comment
