SMB copy drops after couple of GB


Ruuddie


Hi,

 

I have recently installed a new Unraid 6.9.2 server, with one 5TB parity disk and two 3TB data disks in an array. No SSD caching at the moment. The array is set up using the default xfs; I haven't switched to btrfs because I don't need any of the new features and prefer full stability and proven technology.

It's a Pentium G6500 with 16GB RAM. The disks are connected to a Dell PCIe card flashed with LSI SAS2008 IT firmware, acting as HBA.

 

When I copy a large file via SMB (edit: I mean copying a file from my PC to the Unraid server), I notice the transfer speed drops from the full 112 MB/s to around 40 MB/s after a couple of GB. I have seen some other topics about this issue, but I don't think they were conclusive in their solution. Some suggest the disks are faulty, but before I installed Unraid they worked flawlessly (tested with both VMware and Windows Server). Adding an SSD cache might work, but I figure a normal HDD should easily sustain 100 MB/s without an SSD cache in between.

 

I installed netdata to try and get some insight into what's happening; I attached screenshots of this. The first screenshot shows the system both reading and writing at around 80 MB/s at the same time. The second and third screenshots show both reads and writes of 40 MB/s to both sdb and sdc. sdb is the 5TB parity disk, sdc is one of the 3TB data disks.

So if I interpret this correctly, it seems to:

- Write the file to the parity and data disk at the same time (good, redundancy!)

- Do this by reading the file back after it has been written, instead of calculating parity in memory

 

Is this an Unraid issue or an xfs issue? I think it should calculate the parity while the file is still in memory. Re-reading a file that has just been written seems very inefficient, and is obviously very taxing.

 

Regards,

Ruud

2022-02-04 10_38_48-Window.png

2022-02-04 10_51_45-unraid netdata dashboard — Mozilla Firefox.png

2022-02-04 10_51_34-unraid netdata dashboard — Mozilla Firefox.png


The initial burst you're seeing at line speed is when the writes are being cached in RAM. Once that fills, the writes have to happen directly to the drive.
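That RAM write cache is the Linux kernel's dirty-page buffer, and the point where the burst ends is governed by the kernel's write-back thresholds. A small sketch for inspecting them (standard Linux procfs paths; exact values vary per system, and this is illustrative rather than Unraid-specific):

```python
# Read the kernel's dirty-page thresholds from procfs (Linux only).
# Writes proceed at full line speed until roughly dirty_ratio percent
# of RAM holds unflushed pages; after that, writers are throttled to
# the real disk speed -- which matches the burst-then-drop pattern.
from pathlib import Path

def read_vm_setting(name: str) -> int:
    """Return a /proc/sys/vm setting as an integer."""
    return int(Path("/proc/sys/vm", name).read_text().strip())

if __name__ == "__main__":
    # Percent of RAM at which background flushing kicks in:
    print("dirty_background_ratio:", read_vm_setting("dirty_background_ratio"))
    # Percent of RAM at which writers are throttled to disk speed:
    print("dirty_ratio:", read_vm_setting("dirty_ratio"))
```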

 

Unraid has 2 write modes:

  1. Read/modify/write: This is the default. It allows drives not involved in the write to spin down, so only the parity drive and the disk in question are touched. It is also the slowest, since it has to read the current contents of parity and the data disk, recalculate what the contents will be, wait for the drives to spin back around to the applicable sectors, and then write the information. I.e., it's 2-3x slower than simply writing the data to the drive.
  2. Reconstruct write: This writes the data directly to the data drive and the parity drive simultaneously, but has to read from every other drive at the same time in order to calculate the proper parity. It is the fastest (close to the write speed, or the read speed of the slowest drive), but the caveat is that every drive has to be spun up for any write to the array.
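The parity math behind both modes can be sketched with XOR (assuming single parity; function names are illustrative, not Unraid's actual code). Read/modify/write updates parity from the old data and old parity alone, while reconstruct write recomputes it from every other data disk:

```python
# Sketch of single-parity (XOR) updates behind the two write modes.
# Names and data are hypothetical, for illustration only.

def read_modify_write(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Update parity using only the target disk and the parity disk.
    Needs extra reads of the old data and old parity first (slow path)."""
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

def reconstruct_write(new_data: bytes, other_disks: list) -> bytes:
    """Recompute parity from scratch: XOR the new data with the
    corresponding blocks read from every other data disk (fast path,
    but every drive must be spun up)."""
    parity = bytearray(new_data)
    for disk in other_disks:
        parity = bytearray(p ^ d for p, d in zip(parity, disk))
    return bytes(parity)

# Both modes must produce identical parity for the same final array state.
disk1_old, disk1_new = b"\x0f\x0f", b"\xf0\x0f"
disk2 = b"\x33\x33"
old_parity = reconstruct_write(disk1_old, [disk2])
assert read_modify_write(disk1_old, disk1_new, old_parity) == reconstruct_write(disk1_new, [disk2])
```

The extra reads in the first function are what make the default mode 2-3x slower: each write costs a read of the old sector and parity, a rotation, and then the two writes.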

Cache drives solve the problem because writes can be cached to them, and they're not involved in the parity system, so writes proceed at basically full line speed and then get moved to the parity-protected array, usually during off hours.

 

The 40 MB/s is at the low end of average, but not an unacceptable number.


Thanks a lot for the clear answer! I had disabled the cache because it gave issues when I moved all my data from Synology to Unraid (the SSD is 250GB and the mover only runs every so often, so the SSD filled up and the copy job stopped). I guess I can just re-enable it after the move is done, since I won't be moving more than 250GB at a time anyway.

30 minutes ago, Ruuddie said:

disabled the cache because it gave issues when I moved all my data from Synology to Unraid (the SSD is 250GB and the mover only runs every so often, so the SSD filled up and the copy job stopped). I guess I can just re-enable it after the move is done, since I won't be moving more than 250GB at a time anyway.

Best if you don't use cache when writing more than the cache can hold at one time. Often people will even do the initial data load without parity, since that is faster, then build parity when done.

 

Do you have good (enough) backup? You must always have another copy of anything important and irreplaceable. Parity is not a substitute.


Thanks for your help guys! I agree, RAID/parity is never a backup solution. The data I have on here is disposable-ish; I wouldn't like to lose it (hence parity/RAID), but my documents, pictures, etc. are backed up to the cloud using OneDrive 🙂

 

I have added two 250GB NVMe SSDs to the cache pool. I do reach the full 1Gbit speed the whole time now, so yay there!

My first plan was to add a 250GB SATA SSD as cache, one 250GB NVMe SSD to 'ssd-pool-1' and another 250GB NVMe SSD to 'ssd-pool-2' for VMs. There would be no redundancy, but optimal use of the storage available on my small SSDs. After realizing I'll need caching for proper performance, and knowing the cache holds data until the mover is invoked, I feel like I need redundant cache as well. And it wouldn't make sense to use one SATA SSD and one NVMe SSD in the same cache pool, because of the large speed difference.

 

Does my reasoning make sense here? Or do you have other ideas/insights?

