Question for 10Gb users


statecowboy


Hi all. I was curious to hear from those of you who have a 10Gb connection on your network for use with Unraid. I understand that ultimately any reads from the array are limited by disk speed (unless of course you’re reading from an unassigned SSD or have all SSDs in your array). It looks like writes to the array can benefit to a certain extent with an SSD cache drive. That said, I’ve talked myself out of investing in 10Gb gear a few times now. I don’t think there’s a huge gain in performance to be had for my use case (I would love to increase speeds between my server and my office PC for editing photos and general file swaps).
 

All of that said, for those of you who have jumped into 10Gb, have you noticed a significant difference in things? Any real-world examples of what you’ve noticed that you wouldn’t mind sharing? I of course don’t NEED to go 10Gb, but part of me wants to tinker/play. I also don’t want to invest in gear and have a lot of remorse because it didn’t make much of a difference.


My physical disks can hit 165+ MB/s on reads, so when moving large video projects and photo collections off the array, it’s 50% faster. Then on the way back, it hits 400+ MB/s to the SSD cache, making it 3-4x faster. Eventually I’ll go RAID 10 on the cache and it should hit 600 MB/s. Or maybe I’ll go NVMe and try to saturate the connection (if I can free up a PCIe slot).
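(For scale: gigabit Ethernet tops out at roughly 110-115 MB/s in real-world transfers, so 165 MB/s is about 1.5x that and 400 MB/s is about 3.5x, which is where the 50% and 3-4x figures come from.)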

 

It’s not really something everyone needs. If you only move 1-3GB, it’s not really worth it. But with 200GB video project files, it helps save some real time.

 

(but it is fun to play around with)

2 hours ago, 1812 said:

My physical disks can hit 165+ MB/s on reads, so when moving large video projects and photo collections off the array, it’s 50% faster. Then on the way back, it hits 400+ MB/s to the SSD cache, making it 3-4x faster. Eventually I’ll go RAID 10 on the cache and it should hit 600 MB/s. Or maybe I’ll go NVMe and try to saturate the connection (if I can free up a PCIe slot).

 

It’s not really something everyone needs. If you only move 1-3GB, it’s not really worth it. But with 200GB video project files, it helps save some real time.

 

(but it is fun to play around with)

If you don’t mind me asking, what do you use for the setup? I have, for better or worse, invested in UniFi. I ran pfSense for a while and it was great, but I didn’t feel like diving in deep when wanting to do simple things. UniFi is just a bit easier for me to manage. So I currently have the UDM-Pro, the US-24-250W, and the item I am considering is the US-16-XG. I have my server and network equipment in my basement, so the 16-port XG would go down there. I have an outdoor-rated Cat 6 running up to my office where I would need the other end of the 10Gb connection, so I would probably need some sort of switch there as well (I feed an AP off of that line in addition to my office workstation).


I run a virtualized Sophos UTM on my main server in my office, which dumps out gigabit internet access to a MikroTik CRS305-1G-4S+IN 10GbE switch (also in the office). That then runs back to my main server via a 10GbE DAC to a Mellanox ConnectX-2, and two DACs run to my workstation/backup server into a Solarflare 2-port card, which is split between backup server access and a work VM with graphics output for video/photo editing. The last port on that switch has a copper transceiver which connects to a Cat 6a cable (I ran two Cat 6a drops into every room of the house except for the office, which has four).
 

That Cat 6a connection then serves as a trunk line running to a CSS326-24G-2S+RM switch in the pantry for the house LAN. It has two 10GbE ports and 24 gigabit ports. The rest of the devices in the house are 1Gb connections, but I went ahead and ran 10GbE to future-proof, as the cost was not that much more versus Cat 5e.


In a year or two, I’ll upgrade the house switch to have more 10GbE ports if needed. But for now any one connection (or several connections) on the LAN can saturate its link to the server and still not bog down internet access, and vice versa.

 

I could have simplified a little bit by having the Sophos firewall use the 10GbE connection in the server through a virtual bridge, but I didn’t. For some reason I like having separate hardware for the VM without direct access to the server itself. Plus it also keeps internet traffic off that interface, minimizing latency for gaming and maximizing data throughput.

 


I can transfer as fast as the 12TB Toshiba array drives can go, which is about 250MB/s.  

 

The server has an Intel X520 with SFP+, connected to the switch using a DAC cable. The workstation has an ASUS Aquantia 10G card via Cat 6. The switch is one of those bizarre Netgear 10-port MS510TX things (2x10G, 2x2.5/5G, 2x2.5G, 4x1G).


I have a 10Gb NIC in my Unraid server and one in my desktop; both are connected to a MikroTik switch at 10 gig.

 

For regular use there is not a really big difference. Large transfers are quicker. But to be honest, does it really matter if you wait 15 seconds or 10 seconds for a transfer to complete? Unless it becomes nearly instantaneous, it does not really make a difference.

 

It is cool to see the quick speeds though ;-)


I run 10GbE on two workstations, each connected directly (i.e. no switch) to one of two 10GbE cards in unRAID.

Both workstations have large NVMe and RAID0 SSDs.

My cache in unRAID is three 4TB SSD drives in a btrfs RAID0 array, for a total of 12TB.  So in theory, the hardware will support 10GbE speeds.  Spinners in unRAID top out at about 200MB/sec.

 

Transfers to/from the workstations to cache are very fast (but I need to turn off real-time virus scanning on the workstation to get the absolute fastest performance), and they get within 80% of wire speed.

 

Workstation backups go to cache, and certain datasets I need to have fast access to are kept only on cache.

 

For ransomware protection, the entire server is read-only except for "incoming" data on cache. Anything I want to copy to the server, I copy to cache first, and then manually log into the server and move it from cache to its ultimate destination on the array.

 

Cache is rsynced to a 12TB data drive (spinner) in the array periodically, after confirming the data on cache is valid.
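A rough sketch of what that periodic step could look like (the paths and the checksum-based "valid" check below are placeholders for illustration, not the exact setup):

```python
#!/usr/bin/env python3
# Sketch only: mirror a cache share to an array data drive with rsync.
# The paths and the dry-run "validation" pass are assumptions for the
# example, not a description of the actual layout.
import subprocess
import sys

CACHE = "/mnt/cache/incoming/"         # hypothetical cache share
ARRAY = "/mnt/disk1/incoming_backup/"  # hypothetical 12TB spinner in the array


def cache_looks_valid() -> bool:
    """Rough stand-in for 'confirming data on cache is valid': a
    checksum-based dry run that lists what would change."""
    result = subprocess.run(
        ["rsync", "-a", "--checksum", "--dry-run", "--itemize-changes",
         CACHE, ARRAY],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode == 0


def sync_cache_to_array() -> int:
    # -a preserves permissions/times; no --delete, so files removed on
    # cache never disappear from the array copy automatically.
    return subprocess.call(["rsync", "-av", "--progress", CACHE, ARRAY])


if __name__ == "__main__":
    if not cache_looks_valid():
        sys.exit("rsync dry run failed; skipping the sync")
    sys.exit(sync_cache_to_array())
```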

 

On 3/25/2020 at 7:08 PM, bubbaQ said:

I run 10GbE on two workstations, each connected directly (i.e. no switch) to one of two 10GbE cards in unRAID.

Both workstations have large NVMe and RAID0 SSDs.

My cache in unRAID is three 4TB SSD drives in a btrfs RAID0 array, for a total of 12TB.  So in theory, the hardware will support 10GbE speeds.  Spinners in unRAID top out at about 200MB/sec.

 

Transfers to/from the workstations to cache are very fast (but I need to turn off real-time virus scanning on the workstation to get the absolute fastest performance), and they get within 80% of wire speed.

 

Workstation backups go to cache, and certain datasets I need to have fast access to are kept only on cache.

 

For ransomware protection, the entire server is read-only except for "incoming" data on cache. Anything I want to copy to the server, I copy to cache first, and then manually log into the server and move it from cache to its ultimate destination on the array.

 

Cache is rsynced to a 12TB data drive (spinner) in the array periodically, after confirming the data on cache is valid.

 

 

That is a very nice design... If not for the manual work, I would be willing to do the same!

 

I wonder, though... I read that we might be getting multiple cache pools. If that is true, maybe I could do something similar to your process:

 

- "Regular" array is read only;

- Incoming files get written to the SSD cache (1TB cache);

- On a periodic basis this gets copied automatically to cache-2 (let's say a 10TB drive that holds all shares);

- Periodically I copy the contents of cache-2 manually to the regular array.

 

That would minimise the manual work to a couple of times per year. It would fit my use case.
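If multiple pools do arrive, the automated hop in step 3 could be as simple as a scheduled copy. A rough sketch, with made-up pool names and paths (it would run from cron or the User Scripts plugin rather than by hand):

```python
#!/usr/bin/env python3
# Sketch of the automated cache -> cache-2 hop (step 3 above).
# The pool names ("cache", "cache2") and the share path are assumptions.
import subprocess
import sys

SRC = "/mnt/cache/incoming/"   # small, fast SSD pool that takes new writes
DST = "/mnt/cache2/incoming/"  # hypothetical 10TB second pool holding all shares

# -a preserves attributes; no --delete, so files removed from the SSD pool
# are never removed from the bigger pool automatically.
sys.exit(subprocess.call(["rsync", "-a", "--progress", SRC, DST]))
```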

