
6.3.3 - Fastest way to transfer USB3 RAID 5 to unRAID disks


Magoogle


Currently, using PuTTY, I am running cp -r * /mnt/user/Media from inside /mnt/disks/(externaldiskname).

 

Any faster way? The external drive is connected to a PCIe 1x USB 3 card.

 

Also, why is my parity drive "rebuilding" on a fresh array of brand-new drives?


I canceled the transfer and allowed parity to finish.

 

Using rsync to copy data from /mnt/disks/usb3raid5array to /mnt/user/Media, I am seeing about 10-15 MB/s.
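For reference, a typical invocation for that kind of local copy looks something like this (paths taken from the line above; the exact flags used in this transfer aren't stated, so these are just a common starting point):

# -a preserves permissions/ownership/timestamps, -h prints human-readable
# sizes, and --progress shows the per-file transfer rate
rsync -avh --progress /mnt/disks/usb3raid5array/ /mnt/user/Media/

Note the trailing slash on the source: it copies the contents of the directory rather than the directory itself.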

 

This seems slow. Or is this about where it should be?

 

Specs:

Xeon L5630 @ 2.13GHz
4GB DDR3 RAM
Dell PERC H310 controller (6Gb/s)
8x 4TB 7200RPM SATA drives (on the H310 controller)
2x 120GB SSDs for cache (on motherboard SATA)
CX750 PSU
4U server case
10Gb SFP NIC (for VMs on the primary server)
1Gb onboard NIC (for accessing the GUI)


That is slow. However, a lot of factors (USB drive speed, 5400 vs 7200 RPM, hybrid SSD/HDD vs pure SSD, large files vs small files) all affect transfer performance. And while the system is in a parity rebuild, all system performance will feel sluggish.


I canceled all transfers and let the parity build finish. It averaged about 120 MB/s.

 

After the parity build I started transferring files.

 

The RAID 5 is 4x 7200RPM NAS drives in a USB3 Mediasonic Pro Raid box.
This is connected to the unRAID server by a PCIe 1x USB3 card.

 

Edit:

It's also currently transferring large video files, anything from 2GB to 30GB.


I found that the cache was not enabled for the share I am transferring to.

 

After enabling the cache I am seeing transfers from 40 MB/s to 130 MB/s.

 

However, it is only using one SSD for the cache. I have two SSDs assigned to the cache.

 

I am thinking of setting the two SSDs up in RAID 0 and making a single cache drive.

 

On 4/19/2017 at 2:41 PM, Magoogle said:

However, it is only using one SSD for the cache. I have two SSDs assigned to the cache.

 

I am thinking of setting the two SSDs up in RAID 0 and making a single cache drive.

I agree you don't want to use the cache for a large initial transfer, but the bit I quoted above doesn't really make sense. If you were using the default btrfs RAID1 cache pool, then both drives were being used; the second SSD is a mirror of the first.
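If you want to confirm which profile the pool is actually using, btrfs can report it directly (assuming the pool is mounted at the usual /mnt/cache):

# "Data, RAID1" in the output means writes are mirrored across both SSDs
btrfs filesystem df /mnt/cache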

14 hours ago, trurl said:

I agree you don't want to use the cache for a large initial transfer, but the bit I quoted above doesn't really make sense. If you were using the default btrfs RAID1 cache pool, then both drives were being used; the second SSD is a mirror of the first.

 

Interesting.. 

 

Would I not gain anything with a 240GB RAID 0 cache vs. a mirrored 120GB cache?

3 hours ago, Magoogle said:

But at the same time, a single SSD already outperforms the HDDs, so RAID 0 would just leave the bottleneck where it is.

 

That's not the point of the cache; it was first added to unRAID so you can write to (and read from) the server much faster than you could write directly to the array.

Didn't think about the cache needing redundancy...

Very well, I will leave it this way. Not like it matters; I don't download 120GB a night, usually 30-40GB.

Hell, I only run my mover once a week. The upside of using 500GB SSDs in the cache, I guess.



The caching of writes does not speed up writes to the array; it speeds up writes to the server. The data is not protected until the mover runs. If you think about it, the time it takes to write a file to cache, added to the time it takes to later move the file from cache to the array, is LONGER than the time to just copy it to the array in the first place.
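To put hypothetical numbers on that (the speeds here are assumed for illustration, not measured from this system): writing 30GB to cache at ~110 MB/s takes about 4.5 minutes, and moving it to the array later at ~60 MB/s takes another 8.5 minutes or so, for roughly 13 minutes of total disk activity, versus about 8.5 minutes writing straight to the array. The user only waits out the first 4.5 minutes, though, which is the entire appeal of the cache.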

 

Many here do not cache writes, preferring a slower transfer into the immediately protected state in the array.

 

Creating a RAID1 BTRFS cache is a means of maintaining a redundant cache, which would protect it from a single drive failure. BTRFS is not for everyone, though, and I could argue that the cost of a second cache drive for such a small amount of redundancy is high.

 

I prefer my redundancy in the array. Unless I am drumming my fingers waiting for a transfer to complete (which is exceedingly rare), I don't much care if a copy takes more time, especially knowing my data is protected immediately. And if you are in a hurry, you can engage turbo write, which will substantially improve write performance to the array, although it spins up the entire array in the process. For an initial load, it is a big time saver.
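If memory serves, turbo write can be toggled from Settings -> Disk Settings in the GUI, or from the console with something along these lines (md_write_method is the tunable involved; treat the exact values as something to verify on your unRAID version):

# 1 = "reconstruct write" (turbo write): all drives spin up, but the
# read-modify-write penalty on parity updates is avoided
mdcmd set md_write_method 1

# 0 = default read/modify/write behavior
mdcmd set md_write_method 0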

 

Leaving data on an unprotected cache for days seems totally wrong if the goal is providing protection from data loss.


My cache is underpinned by two 500GB SSDs in a hardware RAID, as I had bad experiences with BTRFS.

The idea is that I accept the cost of the SSDs as a trade-off: my transcoding and post-processing scripts all run in flash, and the shows I want to watch are immediately available on the always-spun-up cache drive.

This in turn saves power, since I don't have to constantly spin up a myriad of drives to complete a tiny task.

I prefer my redundancy everywhere. And I also keep backups of everything important on a separate array and offsite.

That being said, when I migrated my production data to unRAID I used rsync to the user0 share so as to bypass the cache disk. A simpler method, if you ask me.
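A sketch of that approach, reusing the example paths from earlier in the thread (your share name will differ):

# /mnt/user0/... is the same share minus the cache, so everything lands
# on parity-protected array disks immediately
rsync -avh --progress /mnt/disks/usb3raid5array/ /mnt/user0/Media/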


I only use a cache on my two 10GbE servers, as it's the only way to get 1GB/s transfers. All my other servers are cacheless; a cache isn't really needed with gigabit and turbo write.


I'm guessing you don't use the Docker/VM Engine on those cacheless servers?




39 minutes ago, bjp999 said:

Many here do not cache writes, preferring a slower transfer into the immediately protected state in the array.

Practically all writes to my server are unattended, taking the form of either queued downloads or scheduled backups from other computers. Any "live" data is on the other computers, backed up nightly. Since nobody is waiting on the writes, there's no reason to cache them.

 

I have the usual app-related stuff on the SSD cache, and I have a cache-only share with a copy of some of our photos so they can be read for screensavers/wallpapers without spinning up anything.

 

Lots of ways to use cache besides caching writes. Lots of different ways to use unRAID.


Archived

This topic is now archived and is closed to further replies.
