ryan

Members
  • Content Count

    64
  • Joined

  • Last visited

Community Reputation

0 Neutral

About ryan

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. No need to assume! You can check with the -vvv option on the ssh command to determine which cipher is used. Have you tried with FTP for comparison? FTP doesn't transfer all the good stuff (ownership, timestamps etc.) but it's good for comparing raw speed against rsync. Also bear in mind that rsync is not a performance transfer tool so much as a tool to sync data and verify that everything matches.
  2. I am actually in the process of this and have spent most of the weekend debugging and looking into it. By default, rsync uses whatever encryption cipher ssh negotiates. To remedy this, I enabled all ciphers on the Synology box in DSM (Terminal > SSH > Advanced > Low Encryption Ciphers), did the same on the unRAID box, and then told rsync's ssh transport to use a different encryption algorithm. The result? I get the full 115MB/s over rsync to the Syno box, and by running two or three of these over a trunked Ethernet link I am sending at very high speeds.

     rsync -av -e 'ssh -c arcfour' /mnt/user/Media/ ryan@192.168.1.193:/volume1/Media/ --progress

     Give it a try,
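A note for anyone trying this today: arcfour has since been removed from modern OpenSSH releases, so the same trick now needs a current cipher name. A quick, hedged sketch (the host, user and paths in the comment are the ones from the post above; adjust to your own setup):

```shell
# List the ciphers your local ssh client actually supports
# (names vary by OpenSSH version; arcfour is gone from recent releases):
ssh -Q cipher

# Then force rsync's ssh transport to use one of the listed fast ciphers, e.g.:
#   rsync -av --progress -e 'ssh -c aes128-ctr' /mnt/user/Media/ ryan@192.168.1.193:/volume1/Media/
```

Both ends must support the chosen cipher, which is why the "Low Encryption Ciphers" toggle on the Synology side mattered in the first place.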
  3. OK, just tried that and confirmed it was layer2 (probably because of the reboot), but I changed it and reset the interface state. What's interesting is that now, when I have set this, I see:

     Transmit Hash Policy: layer2+3 (2)
     Actor Churn State: none
     Partner Churn State: none

     root@ATLAS:~# cat /sys/class/net/bond0/bonding/xmit_hash_policy
     layer2 0
     root@ATLAS:~# ifconfig bond0 down; echo 'layer2+3' > /sys/class/net/bond0/bonding/xmit_hash_policy; ifconfig bond0 up
     root@ATLAS:~# cat /sys/class/net/bond0/bonding/xmit_hash_policy
     layer2+3 2
     root@ATLAS:~# cat /proc/net/bonding/bond0
     Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
     Bonding Mode: IEEE 802.3ad Dynamic link aggregation
     Transmit Hash Policy: layer2+3 (2)
     ...
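The reason layer2+3 can spread flows that plain layer2 lumps onto one slave follows from the bonding driver's transmit-hash formula. A rough sketch below, heavily simplified to the low address byte only (the real kernel formula also mixes the packet type and full addresses; the octets are illustrative values borrowed from addresses seen in this thread):

```shell
# Simplified view of the bonding xmit_hash_policy (low byte only).
src_mac=$((0x69))   # last octet of 00:24:9b:1a:cf:69
dst_mac=$((0x36))   # last octet of ec:08:6b:e4:f1:36
src_ip=10           # last octet of 192.168.1.10

for dst_ip in 20 21; do
  l2=$(( (src_mac ^ dst_mac) % 2 ))                      # layer2: MACs only
  l23=$(( (src_mac ^ dst_mac ^ src_ip ^ dst_ip) % 2 ))   # layer2+3: MACs + IPs
  echo "client .$dst_ip -> layer2 slave $l2, layer2+3 slave $l23"
done
```

With layer2, every flow between the same pair of MACs hashes to the same slave no matter how many clients are behind it; layer2+3 mixes the IP addresses in, so different client IPs can land on different slaves.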
  4. Not the case; after I swap the interfaces back, I get traffic on eth0 but not eth1...
  5. Interesting. I had a look around the web - this unRAID is on an HP NL54 - and I can see others have set up bonding too, so it should work OK. I think in the first post I must have looked at the output too early; I now see "monitoring" in the Actor/Partner churn state:

     root@ATLAS:~# cat /proc/net/bonding/bond0
     Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

     Bonding Mode: IEEE 802.3ad Dynamic link aggregation
     Transmit Hash Policy: layer2 (0)
     MII Status: up
     MII Polling Interval (ms): 100
     Up Delay (ms): 0
     Down Delay (ms): 0

     802.3ad info
     LACP rate: slow
     Min links: 0
     Aggregator selection policy (ad_select): stable
     System priority: 65535
     System MAC address: 00:24:9b:1a:cf:69
     Active Aggregator Info:
             Aggregator ID: 2
             Number of ports: 2
             Actor Key: 9
             Partner Key: 3327
             Partner Mac Address: ec:08:6b:e4:f1:36

     Slave Interface: eth0
     MII Status: up
     Speed: 1000 Mbps
     Duplex: full
     Link Failure Count: 0
     Permanent HW addr: 00:24:9b:1a:cf:69
     Slave queue ID: 0
     Aggregator ID: 2
     Actor Churn State: monitoring
     Partner Churn State: monitoring
     Actor Churned Count: 0
     Partner Churned Count: 0
     details actor lacp pdu:
         system priority: 65535
         system mac address: 00:24:9b:1a:cf:69
         port key: 9
         port priority: 255
         port number: 1
         port state: 61
     details partner lacp pdu:
         system priority: 32768
         system mac address: ec:08:6b:e4:f1:36
         oper key: 3327
         port priority: 32768
         port number: 12
         port state: 61

     Slave Interface: eth1
     MII Status: up
     Speed: 1000 Mbps
     Duplex: full
     Link Failure Count: 0
     Permanent HW addr: 38:ea:a7:a9:2c:f9
     Slave queue ID: 0
     Aggregator ID: 2
     Actor Churn State: monitoring
     Partner Churn State: monitoring
     Actor Churned Count: 0
     Partner Churned Count: 0
     details actor lacp pdu:
         system priority: 65535
         system mac address: 00:24:9b:1a:cf:69
         port key: 9
         port priority: 255
         port number: 2
         port state: 61
     details partner lacp pdu:
         system priority: 32768
         system mac address: ec:08:6b:e4:f1:36
         oper key: 3327
         port priority: 32768
         port number: 10
         port state: 61

     I have swapped the interfaces around - that is to say eth0 <> eth1 in the unRAID configuration - to test. Still the same; I do not get the full speed. After a few tests I can also see that the links are not "shared":

     bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
             inet 192.168.1.10  netmask 255.255.255.0  broadcast 0.0.0.0
             ether 00:24:9b:1a:cf:69  txqueuelen 1000  (Ethernet)
             RX packets 3842122  bytes 5830623843 (5.4 GiB)
             RX errors 0  dropped 570  overruns 0  frame 0
             TX packets 1395401  bytes 101876433 (97.1 MiB)
             TX errors 0  dropped 9  overruns 0  carrier 0  collisions 0

     eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
             ether 00:24:9b:1a:cf:69  txqueuelen 1000  (Ethernet)
             RX packets 65645  bytes 94136919 (89.7 MiB)
             RX errors 20  dropped 0  overruns 0  frame 20
             TX packets 2664714  bytes 199981043 (190.7 MiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
             ether 00:24:9b:1a:cf:69  txqueuelen 1000  (Ethernet)
             RX packets 3842029  bytes 5830605446 (5.4 GiB)
             RX errors 0  dropped 570  overruns 0  frame 0
             TX packets 464779  bytes 32597478 (31.0 MiB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
             device interrupt 18

     Looks like eth0 is not used at all. I will try and swap around again; it could be something to do with the eth0 interface!
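To watch which slave is actually carrying traffic without wading through full ifconfig output, the per-interface byte counters in sysfs are handy. A small sketch; it simply lists whatever interfaces the machine has, so the eth0/eth1 names above will appear if they exist on yours:

```shell
# Print rx/tx byte counters for every network interface from sysfs.
for dev in /sys/class/net/*; do
  printf '%-8s rx_bytes=%s tx_bytes=%s\n' "${dev##*/}" \
    "$(cat "$dev/statistics/rx_bytes")" \
    "$(cat "$dev/statistics/tx_bytes")"
done
```

Running it twice a few seconds apart during an iperf3 run makes it obvious whether one slave is sitting idle.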
  6. In addition, I have tried bonding on my workstation and tested the opposite way around; this works fine and I get the full 2Gbit/s. So I know it's not the switch setup. Could it perhaps be that the onboard Ethernet device on the unRAID box doesn't support this somehow?
  7. Hmm, I have just tried this and it seems to make no difference. I see the same when testing the opposite way, i.e. unRAID as the iperf3 client against two separate machines. I get the same result, although it does not seem to be balancing across the interfaces!
  8. Thanks for the clarification on STP. The MAC address is also fixed:

     details actor lacp pdu:
         system priority: 65535
         system mac address: 38:ea:a7:a9:2c:f9
         port key: 9
         port priority: 255
         port number: 2
         port state: 61
     details partner lacp pdu:
         system priority: 32768
         system mac address: ec:08:6b:e4:f1:36
         oper key: 2536
         port priority: 32768
         port number: 12
         port state: 61

     So, within the TP-LINK:
  9. @bonienl Thanks for your observations - 192.168.1.10 is indeed my unRAID system. The switch was set up with LAG and STP enabled; I also tested without. I have a TP-LINK T1600G-28TS. The MAC address is strange too; I didn't see that. If it is the switch not setting up the LAG properly, what other settings do I need to consider? AFAIK I should only have to select the two ports that need to be aggregated?
  10. There are three different machines connecting to two different iperf3 processes in the above test. The link should manage 2Gbit/s - I actually need 4 but am testing with three.
  11. I'm having some issues with my bonding. I have two NICs:

     root@ATLAS:/mnt/cache/cache_only# sudo cat /proc/net/bonding/bond0
     Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

     Bonding Mode: IEEE 802.3ad Dynamic link aggregation
     Transmit Hash Policy: layer2 (0)
     MII Status: up
     MII Polling Interval (ms): 100
     Up Delay (ms): 0
     Down Delay (ms): 0

     802.3ad info
     LACP rate: slow
     Min links: 0
     Aggregator selection policy (ad_select): stable
     System priority: 65535
     System MAC address: 38:ea:a7:a9:2c:f9
     Active Aggregator Info:
             Aggregator ID: 1
             Number of ports: 1
             Actor Key: 9
             Partner Key: 3502
             Partner Mac Address: ec:08:6b:e4:f1:36

     Slave Interface: eth0
     MII Status: up
     Speed: 1000 Mbps
     Duplex: full
     Link Failure Count: 0
     Permanent HW addr: 38:ea:a7:a9:2c:f9
     Slave queue ID: 0
     Aggregator ID: 1
     Actor Churn State: none
     Partner Churn State: none
     Actor Churned Count: 0
     Partner Churned Count: 0
     details actor lacp pdu:
         system priority: 65535
         system mac address: 38:ea:a7:a9:2c:f9
         port key: 9
         port priority: 255
         port number: 1
         port state: 61
     details partner lacp pdu:
         system priority: 32768
         system mac address: ec:08:6b:e4:f1:36
         oper key: 3502
         port priority: 32768
         port number: 10
         port state: 61

     Slave Interface: eth1
     MII Status: up
     Speed: 1000 Mbps
     Duplex: full
     Link Failure Count: 1
     Permanent HW addr: 00:24:9b:1a:cf:69
     Slave queue ID: 0
     Aggregator ID: 2
     Actor Churn State: churned
     Partner Churn State: churned
     Actor Churned Count: 2
     Partner Churned Count: 2
     details actor lacp pdu:
         system priority: 65535
         system mac address: 38:ea:a7:a9:2c:f9
         port key: 9
         port priority: 255
         port number: 2
         port state: 69
     details partner lacp pdu:
         system priority: 65535
         system mac address: 00:00:00:00:00:00
         oper key: 1
         port priority: 255
         port number: 1
         port state: 1

     However, I am only getting 1Gbit of throughput in total from three different clients:

     root      2312  0.7  0.0   6500  1696 pts/1  S+  10:52  0:52 iperf3 -s -B 192.168.1.10 -p 5053
     root     23263  1.6  0.0   6500  1764 pts/3  S+  12:20  0:25 iperf3 -s -B 192.168.1.10 -p 5054
     root     29199  2.3  0.0   6500  1596 pts/2  S+  10:40  2:58 iperf3 -s -B 192.168.1.10 -p 5202

     Three different instances of iperf3 give:

     Computer 1
     [ ID] Interval           Transfer     Bandwidth       Retr
     [  4]   0.00-10.00  sec   390 MBytes   327 Mbits/sec   71     sender
     [  4]   0.00-10.00  sec   388 MBytes   326 Mbits/sec          receiver

     Computer 2
     [ ID] Interval           Transfer     Bandwidth       Retr
     [  4]   0.00-10.00  sec   198 MBytes   166 Mbits/sec   1023   sender
     [  4]   0.00-10.00  sec   198 MBytes   166 Mbits/sec          receiver

     Computer 3
     [ ID] Interval           Transfer     Bandwidth
     [  4]   0.00-10.00  sec   861 MBytes   722 Mbits/sec   sender
     [  4]   0.00-10.00  sec   861 MBytes   722 Mbits/sec   receiver

     Any ideas? Note the retries too!?
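For what it's worth, summing the receiver figures from the three runs above is a quick sanity check that the aggregate is well short of what a healthy 2x1GbE LACP bond should deliver:

```shell
# Receiver-side results from the three iperf3 runs above, in Mbit/s:
total=$(( 326 + 166 + 722 ))
echo "aggregate: ${total} Mbit/s of a roughly 2000 Mbit/s bond"
```

The high retry counts alongside the low per-client rates are consistent with flows contending for one slave rather than spreading across both.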
  12. Noted! So this is a mix of incorrect usage and filesystem types. Thanks! I'll have a play this weekend and see what I can figure out for my use case.
  13. Yes, I think this may be the case. I have actually figured out that if I take the slots down to 1 and then mount the drive (changed back to XFS), it mounts OK. I do not wish to use a cache pool, but rather separate drives, cache1, cache2. Is this possible, or do I have the usage of cache drives wrong?
  14. I have added a second cache drive, to be outside of the current array. However, upon adding the drive and starting the array, the existing cache drive started to clear?? Now both drives are unmountable?! I seem to have lost my entire app directory for Docker and all other content on there?? Why has this happened?! I have temporarily mounted the cache, and I can see that the data is still there:

     root@ATLAS:/mnt/test# ls -la
     total 0
     drwxrwxrwx  6 nobody users  70 Nov 24 20:24 ./
     drwxr-xr-x  8 root   root  160 Nov 24 22:48 ../
     drwxrwxrwx 14 nobody users 247 Nov 24 21:20 appdata/
     drwxrwxrwx  2 nobody users  24 Oct 22 17:54 cache_only/
     drwxrwxrwx 10 nobody users 332 Oct 15 17:43 downloads/
     drwxrwxrwx  8 nobody users 242 Nov 24 22:25 vmware/

     How can I get this back into the cache section? Every time I add it, it says I need to format.
  15. Perhaps try lsof | grep /mnt/ and see if there is anything you recognise?