Just asking for validation of my config / usage - with two questions at the end.



Hi,

 

I had the trial in use, then extended it while I read all sorts of stuff and watched all sorts of videos, and I am pretty happy with everything I have running and working now.

I have 4 x 4TB drives in the array (plus parity), 1 x 1TB SSD as cache, 1 x 4TB unassigned device (same NAS-type disk as the other four), and 1 x 3TB USB-attached disk for backups.

I have emby, plex,  sonarr, delugevpn, krusader, letsencrypt/nginx, zoneminder and tvheadend all installed, configured and doing their usual stuff.

I have various plugins - ca, ca auto update, ca backup/restore, ca cleanup appdata, all the dynamix system related tools, fix common problems, nerd tools, preclear disks, rclone-beta, unassigned devices (question below relates to that), unbalance and user scripts.

All tools are working and running fine.

I have my work-related VMs installed on the 1TB SSD I defined as the cache drive (question below on that).  These VMs total 600GB of disk space, leaving 400GB free on the cache disk.

I have a load of user shares, some to support the above (downloads on the cache drive, media shares, etc).

I have user scripts created for rclone to send some important stuff to Google Drive.

I have user scripts there to do rsyncs to the USB-attached unassigned drive and to the 4TB internal unassigned drive on a schedule.
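
Roughly, the scripts are along these lines - a sketch only, since the share names, mount points and the rclone remote name here are placeholders rather than my exact setup:

```bash
#!/bin/bash
# Run on a schedule by the User Scripts plugin. Paths, share names and the
# "gdrive" remote name below are placeholders for illustration only.

# Push the important stuff to Google Drive with rclone
rclone sync /mnt/user/important gdrive:unraid-backup \
    --transfers 4 --log-file /boot/logs/rclone-backup.log

# Mirror the data share to the internal 4TB unassigned disk
# (Unassigned Devices mounts drives under /mnt/disks/)
rsync -av --delete /mnt/user/data/ /mnt/disks/backup_4tb/data/

# And the same again to the USB-attached drive
rsync -av --delete /mnt/user/data/ /mnt/disks/usb_backup/data/
```

The User Scripts plugin then just runs it on whatever cron schedule I set.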

 

So - question 1 - the cache disk:

All of my shares, especially the data share, are larger than the free space on the SSD, so I am unable to use the cache disk as an actual 'cache disk'.  I think when I went hell for leather on this build at the beginning, thinking I knew everything, I did not really understand how the cache disk worked - I assumed the cache needed to be as fast as possible, which is why I put the SSD in as the cache.  Some of my file transfers are larger than the free space on the SSD, so I am pretty much stuffed there - there is never any 'moving' for the 'mover' to do.  I created a 4TB NAS disk in the server as an unassigned device, with the intention that it be the warm spare for the array.  Reading more, it seems maybe I should make this 4TB NAS disk the cache disk instead; then I can point my data shares at it and activate the mover.  But what about the VMs - they need SSD speed, so I assume I move my VM images to the 1TB SSD as an unassigned drive?  Can someone advise on that.  And what about the dockers?  They are on the SSD / cache disk now.

 

So - Question 2 - Jeez, I typed so much on Q1 I can't remember what question 2 was....... oh well!

 

Thanks in advance, and what a great product.


1:  I don't run my VMs on my system cache disk, as they can get interrupted by dockers and their activity, and by large file transfers to the drive. Yes, you could use your 4TB disk as a cache disk, but it's only going to have the performance of that single spinning disk. Depending on your dockers, they may be just fine residing on a spinning disk. What many people have been doing is having one SSD (or a mirrored pair) for cache, used as the array write cache and for docker usage, and a second (or in my case 2nd, 3rd, and 4th) SSD mounted via Unassigned Devices holding the VM images.

 

2. ?


Thanks @1812.

 

So I am thinking this, for ease of config changes:

 

Keep the 1TB SSD as the cache drive, so the dockers (about 100GB in size) keep their faster access, leaving 900GB for caching once I change the data share to use the cache and run the mover daily.  I use the CA appdata backup plugin for protection and don't need real-time mirror protection for this.

Buy a 2TB SSD to add as an unassigned device and move my VM images to it, then update the XMLs to point at the new location (rough sketch of the move after this list).

Delete the VMs share I had for the VM images that were on the cache drive - not needed.

Keep the 4TB unassigned device as the backup / hot spare.
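
For the VM move itself, I am picturing something like this - just a sketch, with a made-up VM name and paths (my real share and mount names will differ), done with the VM shut down:

```bash
#!/bin/bash
# Sketch of moving one VM image from the cache drive to the new
# unassigned-devices SSD. VM name and paths are made up for illustration.

VM="work-vm1"                            # hypothetical VM name
OLD="/mnt/user/VMs/$VM/vdisk1.img"       # current image on the cache-backed VMs share
NEW="/mnt/disks/vm_ssd/$VM/vdisk1.img"   # new home on the 2TB unassigned SSD

virsh shutdown "$VM"                     # ask the VM to shut down; wait for it to power off
mkdir -p "$(dirname "$NEW")"
cp --sparse=always "$OLD" "$NEW"         # keep the vdisk sparse while copying

# Then repoint the VM at the new image, either in the unraid VM editor
# or directly in the XML:
#   virsh edit work-vm1
# ...and change the disk's <source file='...'/> to the new path.
```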

 

With the above, I feel I will have 900GB of SSD for caching the data share - I will make sure any single transfer is under 900GB so as not to affect the dockers.  Dockers can still use the host for fast read/write to the SSD, and my transfers can use the full 1Gbps wire speed when writing to the unraid server (and the mover does its job overnight).  If I moved the 4TB drive to cache instead, the dockers would only get mechanical disk I/O speed, so not as good, I feel.

 

In the future I hope to add a dual-port teamed NIC to both unraid and my Windows PC, meaning twice the write speed to the cache, but still leaving I/O spare for the dockers to use on that SSD.

 

Sound OK ?

 

Sounds like a plan!

 

In the future, if you ever go 10GbE, you can set up cache disks in RAID 1 or 10 for even faster throughput.  10GbE direct connections can be done pretty cheaply now, and in some ways easier than a dual-NIC setup.


I know it can be done cheaply with a direct PC-to-server connection (as per the Spaceinvader One video), but I have many rooms in between and no easy cabling options.  I do however have two ethernet points, which was a bit of foresight at least.....

 


I ran Cat6a into all my rooms a couple of months ago. Just waiting on the rest of the networking component prices to drop!

2 minutes ago, vw-kombi said:

7 years ago - I have no idea if it is cat5 or cat6

7 years?  Pretty much guaranteed Cat5 or Cat5e.

 

I seriously doubt an electrical contractor would have ponied up for the extra cost of Cat6, or the extra hassle of proper termination, that long ago.

I don't remember what year Cat6 became popularly available for not much extra money, but it hasn't been that long.


Yeah - it may be hard to find out from the builder also.  I did some googling:

 

Cat5E cable (which stands for “Cat5 Enhanced”) became the standard cable about 15 years ago and offers significantly improved performance over the old Cat5 cable, including up to 10 times faster speeds and a significantly greater ability to traverse distances without being impacted by crosstalk.

Cat6 cables have been around for only a few years less than Cat5E cables. However, they have primarily been used as the backbone to networks, instead of being run to workstations themselves. The reason for this (beyond cost) is the fact that, while Cat6 cables can handle up to 10 Gigabits of data, that bandwidth is limited to 164 feet — anything beyond that will rapidly decay to only 1 Gigabit (the same as Cat5E).

Cat6A is the newest iteration and utilizes an exceptionally thick plastic casing that helps further reduce crosstalk. The biggest distinguishing difference between Cat6 and Cat6A cables is that Cat6A can maintain 10 Gigabit speeds for the full 328 feet of Ethernet cable.

 

So - firstly, I can pull off the patch panel, maybe pull some cable out and see what is printed on it.  I have about 8m of cable (two runs) between my main PC and the patch panel - it's a two-storey house, so I don't want to risk pulling through a new cable and losing one.....  I guess the cable can be re-terminated on both ends if termination is the issue.  A bit of a pity I don't have the cards to do a test on the distance etc.

 

On eBay, they have these testers - "Data Networking LAN RJ45 CAT5e CAT6e RJ11 PC Ethernet Cable Tester Wire Tool".

I don't mind dropping 10 bucks on one of those - reckon that will tell me anything?

 

 

 


Close to useless. I say close, because it will only tell you if there was gross negligence - a wire completely severed mid-run, and/or termination so poor that one or more of the wires isn't connected.

 

You have to spend significantly more on a cable tester to get real data, or just hook up some gigabit ethernet cards and see if they negotiate to their highest rate. Current ethernet cards are quite good at figuring out just how good a cable is.

 

I don't have any experience with 10GbE copper, but I assume the cards are just as good at testing the connection and maximizing cable throughput. So you could probably get a couple of $100 eBay NICs and temporarily use two computers as guinea pigs to determine whether any or all of your cable runs will negotiate better than 1Gbps, so you know whether to invest in a switch. If no joy, then just resell the cards and get back most of your money.
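
If you do go that route, checking the result is simple enough - something like the following on the two test machines (the interface name and IP here are just examples):

```bash
# See what rate the link actually negotiated (substitute your interface name)
ethtool eth0 | grep -E 'Speed|Duplex'

# Then confirm real-world throughput between the two machines with iperf3:
#   on one end:    iperf3 -s
#   on the other:  iperf3 -c 192.168.1.10 -t 30
```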


The cheap cable testers you can find just verify that there is an electrical connection and that the wires are connected at the other end - they will not be able to see if the wrong color strand is used (i.e. whether the twisted pairs in the cable are used correctly) or measure the impedances and capacitances to figure out the quality of the cable - no pulses are sent through the cable.

 

Some motherboards have test functionality for the internal NIC - but only to verify that the individual wires are connected as they should be, or to report the distance to a short or break on a cable pair. But this is still only good for checking for a broken cable. It doesn't measure the cable quality. 10GbE really isn't very forgiving.


Thanks for the info.

I have no actual cable issues - all cables are working fine for 1Gbps ethernet - so I just need to find out if they can do 10GbE.

So the devices are no good for that.

So - get a cheap set of cards and use two PCs to test.

 

QQ - do these 10GbE cards (Mellanox etc.) also auto-negotiate down?  I was thinking that way I can add one to the unraid server regardless, and it can still be used on the old hub until the tech around it catches up, if my tests fail.

10 minutes ago, vw-kombi said:

all cables working fine for 1Gbps ethernet, so just need to find out if they can do 10Gbe.

So the devices are no good for that.

The continuity testers are no good, but a real cable tester will show what you want.

You don't want to price out a real cable tester. Just outfit a couple of PCs and function test. MUCH cheaper.

https://www.globaltestsupply.com/product/fluke-networks-dsx-5000-cableanalyzer

