
klamath (Members) · 89 posts
Everything posted by klamath

  1. Fibre or twinax? I use the same ones and have the exact same issues: pulling the twinax out and reseating it a few times will force it to connect. The FreeNAS server I have uses fibre and never has a connection issue. Tim
  2. You may need to ifdown and ifup the interface after that echo:
     #setup bonding for layer2+3
     #ifconfig bond0 down;echo 'layer2+3' >/sys/class/net/bond0/bonding/xmit_hash_policy;ifconfig bond0 up
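     A quick way to confirm the new policy actually took effect after bouncing the bond (a minimal check; bond0 is assumed to be the bond name, as in the post):

         # show the hash policy the bonding driver is actually using
         grep 'Transmit Hash Policy' /proc/net/bonding/bond0
         # expected: Transmit Hash Policy: layer2+3 (2)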
  3. You can just leave it; when you reboot, it should reset itself back to defaults. Tim
  4. You don't have to add each and every one of the mount points to rsyncd.conf, and that will also stop you from doing a double copy if you plan on dumping everything into the same directory. Tim
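     For illustration, a single-module rsyncd.conf on the destination could cover the whole user share tree in one entry (a minimal sketch; the module name, uid/gid, and path are assumptions, not taken from the post):

         # /etc/rsyncd.conf (sketch)
         uid = root
         gid = root
         use chroot = no
         [user]
             path = /mnt/user
             read only = no
             comment = entire unRAID user share tree

     A sender could then push any share through that one module, e.g. rsync -av --progress /mnt/user/Movies rsync://DEST_IP/user/ (again, only a sketch).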
  5. Also, if you want "local" mounts, enable NFS sharing and then do something like this, all on the source:
     mkdir /new_server
     mount IP_OF_NEW_SERVER:/PATH_TO_SHARE /new_server   (e.g. mount 192.168.1.100:/mnt/user/Movies /new_server)
     rsync -av --progress /mnt/user/Movies /new_server
     Never use Samba/SMB to transfer Linux to Linux; it is a slow and shitty protocol. You will have to remount the share after each transfer. Tim
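     If several shares are being moved this way, a small loop keeps it to a single mount (a sketch, assuming the destination exports /mnt/user over NFS; the share names and NEW_SERVER_IP are placeholders, not values from the post):

         # run on the source server; mount the destination's user share once
         mkdir -p /new_server
         mount NEW_SERVER_IP:/mnt/user /new_server
         for share in Movies TV Music; do          # hypothetical share names
             rsync -av --progress /mnt/user/"$share" /new_server/
         done
         umount /new_server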
  6. I think KISS is your best bet here. Pre-make all the shares so both sides have the exact same share names, then just rsync src to dest like this:
     rsync -av --progress /mnt/user/Movies root@IP:/mnt/user/Movies
     (throw in the weaker encryption from the earlier post) The benefit of that is you can launch multiple rsyncs, all copying to their final resting spot. I'd recommend doing a permission change at the very end to make sure all is well on the unRAID server as far as perms go. Tim
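     A rough sketch of the "several rsyncs in parallel, permissions fixed at the end" idea (the share names, DEST_IP, and the use of unRAID's newperms script are assumptions on my part, not from the post):

         # launch one rsync per share in the background (hypothetical share names)
         for share in Movies TV Music; do
             rsync -av --progress "/mnt/user/$share" root@DEST_IP:/mnt/user/ &
         done
         wait    # block until every transfer has finished
         # afterwards, on the destination, reset ownership/permissions, e.g.:
         # newperms /mnt/user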
  7. Why wouldn't you pre-create the shares and do a share-to-share copy? Also, did you set up the rsync server on the destination side? If not, you should use rsync over SSH with minimal encryption, something like:
     rsync -av --progress -e "ssh -T -c arcfour -o Compression=no -x" <source_dir> user@<dest>:<dest_dir>
     Or create NFS mounts and do a "local" rsync to avoid any encryption. Tim
  8. https://www.amazon.com/gp/product/B014QCETU4/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1 That will be the 3rd one I've deployed in my home so far, replacing the buggy QLogic cards I have. Tim
  9. I need 24 drives and I'm on a budget: I got the SAS expander for $100 and a P410 (for flashing only) for $15. The 10Gb NIC was $29. I was looking at other SAS expanders, but $300+ is too rich for my blood. Tim
  10. Understood; I thought I read that you couldn't rearrange the drives in DP. Everything should be rolling in by Friday, and I can't wait to get rid of this lag. Tim
  11. 3 HBAs total, removing two of them; one x4 slot goes to the expander, one x16 for the 10Gb NIC. Tim
  12. Howdy y'all, time to make way for 10Gb networking in my last server. To free up a PCI slot I decided to replace two HBAs with an HP SAS expander. I'm assuming the drive order might be messed up after I bring the SAS card online; is this true? Is a New Config a good option, or should I remove the 2nd parity drive before adding the SAS card? Tim
  13. Please tell me how you make out. Unraid working with anything but a WORM-type workload has been a terrible experience. I love unRAID for what it is, but block storage is something unRAID simply is not good at. I use ghettoVCB for VM backups from my ESX server and noticed that the disk sleep function played havoc with ESX even with no running workload attached, and the dreaded NFS stale file handle caused issues too. If you're using unRAID as a backup device, I would recommend only having the NFS mount in place at backup time; if you're not using a cache disk, your backup times will be pretty significant. Tim
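      One way to keep the NFS datastore mounted only for the duration of the backup is to wrap the ghettoVCB run with esxcli mount/unmount calls on the ESXi host. A sketch only (the export and IP are taken from the showmount example in post 15 below; the datastore name, script path, and config path are assumptions):

          # attach the unRAID export just before the backup
          esxcli storage nfs add --host=192.168.1.99 --share=/mnt/user/vmbackup --volume-name=vmbackup
          # run the backup (ghettoVCB location and config file are hypothetical)
          /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -a -g /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.conf
          # detach the datastore again so unRAID disk spin-down can't upset ESX
          esxcli storage nfs remove --volume-name=vmbackup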
  14. NFSv4 does not work at all out of the box; I recompiled the kernel to include NFSv4 support and found it slower than v3. Tim
  15. The path should be IP:/mnt/user/ESX_Datastore. From a Linux box, showmount -e $IP should show you all the exports if they have a read-access ACL. Example:
      root@raspberry:/orion# showmount -e 192.168.1.99
      Export list for 192.168.1.99:
      /mnt/user/vmbackup *
      Tim
  16. Yep, used to own a Dell PowerConnect; never again. Tim
      (quoting the earlier post:) "This is my switch's configuration; it seems to be purely port-based multi-link. I can't post the output because as soon as I click Apply it just goes down and I can't navigate the webGUI. To bring it back up I had to wipe the USB drive and reinstall everything. Yeah... after a while I'm starting to realize that that thing is a piece of junk... It claims a lot but in reality it sucks."
  17. Your switch does not support dynamic LACP; it is another broken Dell implementation. I'd recommend the Cisco SG-200 series if you want a modern-ish switch that supports most things. I've upgraded to D-Link 10Gb in the house because LACP is rather crappy. http://en.community.dell.com/support-forums/network-switches/f/866/p/19169209/19297716 Static LAG is your only option, and if you lose one link you're hosed. Tim
      (quoting the earlier post:) "This is my switch's configuration; it seems to be purely port-based multi-link. I can't post the output because as soon as I click Apply it just goes down and I can't navigate the webGUI. To bring it back up I had to wipe the USB drive and reinstall everything."
  18. Take some screenshots of the switch LACP config; there are different hashes for load balancing with multi-links: MAC, MAC+IP, IP+port, etc. I'm interested to see what your min/max links are for this new LAG; I'm assuming you set min/max at 4. When you switch the config on, can you look at your server's output of /proc/net/bonding/bond0 and post it? I'm interested to see if the heartbeat is making it onto your NICs. Tim
      (quoting the earlier post:) "I'm a total noob, how do I post the syslog output? On the switch I can create up to 6 groups (I'm using 2: 1 for unRAID and 2 for my Mac) of up to 8 ports (I'm using 2 ports for the Mac and 4 for unRAID). The load balancing I'm using is 802.3ad, which is supported by the switch."
  19. Can you post some syslog output when the change goes active? What are your min/max members per LACP group? What other groups are configured for LACP? What is the load-balancing algorithm for your LACP setup?
  20. This is helpful:
      root@orion:~# cat /proc/net/bonding/bond0
      Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

      Bonding Mode: IEEE 802.3ad Dynamic link aggregation
      Transmit Hash Policy: layer2+3 (2)
      MII Status: up
      MII Polling Interval (ms): 100
      Up Delay (ms): 0
      Down Delay (ms): 0

      802.3ad info
      LACP rate: slow
      Min links: 0
      Aggregator selection policy (ad_select): stable
      System priority: 65535
      System MAC address: 00:X
      Active Aggregator Info:
          Aggregator ID: 1
          Number of ports: 2
          Actor Key: 9
          Partner Key: 4
          Partner Mac Address: 54:X

      Slave Interface: eth0
      MII Status: up
      Speed: 1000 Mbps
      Duplex: full
      Link Failure Count: 1
      Permanent HW addr: 00:X
      Slave queue ID: 0
      Aggregator ID: 1
      Actor Churn State: none
      Partner Churn State: none
      Actor Churned Count: 0
      Partner Churned Count: 0
      details actor lacp pdu:
          system priority: 65535
          system mac address: 00:X
          port key: 9
          port priority: 255
          port number: 1
          port state: 61
      details partner lacp pdu:
          system priority: 32768
          system mac address: 54:X
          oper key: 4
          port priority: 32768
          port number: 4
          port state: 63

      Slave Interface: eth1
      MII Status: up
      Speed: 1000 Mbps
      Duplex: full
      Link Failure Count: 1
      Permanent HW addr: 00:X
      Slave queue ID: 0
      Aggregator ID: 1
      Actor Churn State: none
      Partner Churn State: none
      Actor Churned Count: 0
      Partner Churned Count: 0
      details actor lacp pdu:
          system priority: 65535
          system mac address: 00:X
          port key: 9
          port priority: 255
          port number: 2
          port state: 61
      details partner lacp pdu:
          system priority: 32768
          system mac address: 54:X
          oper key: 4
          port priority: 32768
          port number: 3
          port state: 63
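      When comparing two of these outputs side by side, the negotiation-relevant lines can be pulled out in one pass (a small sketch; bond0 is assumed to be the bond name):

          # aggregator membership, partner identity, churn state, and LACP port state per slave
          grep -E 'Slave Interface|Aggregator ID|Partner Mac|Churn State|port state' /proc/net/bonding/bond0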
  21. Things I would try at this point:
      kernel option 'pcie_aspm=off'
      ethtool -G eth0 rx 4096 tx 4096
      ethtool -G eth1 rx 4096 tx 4096
      ethtool --offload eth0 gso off tso off sg off gro off
      ethtool --offload eth1 gso off tso off sg off gro off
      Tim
      Edit: Some other network vars I have tuned in my bonded setup using e1000s:
      sysctl net.core.rmem_max=16777216
      sysctl net.core.wmem_max=16777216
      sysctl net.ipv4.tcp_rmem='4096 87380 16777216'
      sysctl net.ipv4.tcp_wmem='4096 65536 16777216'
      sysctl net.core.netdev_max_backlog=5000
      sysctl net.nf_conntrack_max=700000
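      None of these settings survive an unRAID reboot on their own, so anything that helps would need to be reapplied at boot. A sketch of where that could live (the stock /boot/syslinux/syslinux.cfg and /boot/config/go locations are assumed, as are the eth0/eth1 names used above):

          # /boot/syslinux/syslinux.cfg -- add the kernel option to the existing append line, e.g.:
          #   append pcie_aspm=off initrd=/bzroot
          # /boot/config/go -- reapply the NIC and sysctl tuning on every boot:
          ethtool -G eth0 rx 4096 tx 4096
          ethtool --offload eth0 gso off tso off sg off gro off
          sysctl net.core.rmem_max=16777216
          sysctl net.core.wmem_max=16777216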
  22. Howdy, wondering if iSCSI support is baked into KVM with the 6.1/6.2 release? I would like to run iSCSI targets for my KVM guests rather than running them over the NFS setup I have now. It seems I can define storage pools in KVM; however, the package support for iSCSI stopped on the 13.x Slackware branch. Tim
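      For reference, if the initiator tooling were available, a libvirt iSCSI storage pool could be defined along these lines (a sketch only; the pool name, portal address, and IQN are invented, and this assumes the virsh CLI that ships alongside unRAID's KVM support, which the post does not confirm):

          # define and start an iSCSI-backed storage pool (hypothetical portal and IQN)
          virsh pool-define-as iscsipool iscsi --source-host 192.168.1.50 \
              --source-dev iqn.2016-01.local.san:target0 --target /dev/disk/by-path
          virsh pool-start iscsipool
          virsh pool-autostart iscsipool
          virsh vol-list iscsipool    # the target's LUNs appear here as volumes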
  23. You can get more info by looking at /proc/net/bonding/bond0. I'm doing LACP with layer 2+3 load balancing:
      root@orion:~# cat /proc/net/bonding/bond0
      Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

      Bonding Mode: IEEE 802.3ad Dynamic link aggregation
      Transmit Hash Policy: layer2+3 (2)
      MII Status: up
      MII Polling Interval (ms): 100
      Up Delay (ms): 0
      Down Delay (ms): 0

      802.3ad info
      LACP rate: slow
      Min links: 0
      Aggregator selection policy (ad_select): stable
      Active Aggregator Info:
          Aggregator ID: 2
          Number of ports: 2
          Actor Key: 9
          Partner Key: 1003
          Partner Mac Address: 1c:de:a7:30:aa:03

      Slave Interface: eth0
      MII Status: up
      Speed: 1000 Mbps
      Duplex: full
      Link Failure Count: 0
      Permanent HW addr: 00:25:90:d5:17:34
      Slave queue ID: 0
      Aggregator ID: 2
      Actor Churn State: none
      Partner Churn State: none
      Actor Churned Count: 0
      Partner Churned Count: 0
      details actor lacp pdu:
          system priority: 0
          port key: 9
          port priority: 255
          port number: 1
          port state: 61
      details partner lacp pdu:
          system priority: 1
          oper key: 1003
          port priority: 1
          port number: 52
          port state: 61

      Slave Interface: eth1
      MII Status: up
      Speed: 1000 Mbps
      Duplex: full
      Link Failure Count: 0
      Permanent HW addr: 00:25:90:d5:17:35
      Slave queue ID: 0
      Aggregator ID: 2
      Actor Churn State: none
      Partner Churn State: none
      Actor Churned Count: 0
      Partner Churned Count: 0
      details actor lacp pdu:
          system priority: 0
          port key: 9
          port priority: 255
          port number: 2
          port state: 61
      details partner lacp pdu:
          system priority: 1
          oper key: 1003
          port priority: 1
          port number: 51
          port state: 61
      root@orion:~#
  24. Please make 8021q a module in the current unRAID kernel to support VLAN configuration via the CLI. Tim
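      For context, this is the sort of CLI VLAN setup the 8021q module would enable (a sketch; the parent interface, VLAN ID, and address are examples, not values from the post):

          # load 802.1Q tagging, then hang a tagged sub-interface off eth0
          modprobe 8021q
          ip link add link eth0 name eth0.100 type vlan id 100
          ip addr add 192.168.100.2/24 dev eth0.100
          ip link set eth0.100 up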
  25. Howdy Mods, Can this please be moved into the Defect ticket? This issue is still occurring with the 6.0.1 release.
      ------------[ cut here ]------------
      WARNING: CPU: 2 PID: 0 at net/sched/sch_generic.c:303 dev_watchdog+0x194/0x1fa()
      NETDEV WATCHDOG: eth0 (e1000e): transmit queue 0 timed out
      Modules linked in: kvm_intel kvm vhost_net vhost macvtap macvlan tun md_mod ebtable_filter ebtables iptable_filter ip_tables w83795 w83627ehf hwmon_vid jc42 coretemp bonding ata_piix i2c_i801 e1000e ptp pps_core mpt2sas raid_class scsi_transport_sas acpi_cpufreq [last unloaded: md_mod]
      CPU: 2 PID: 0 Comm: swapper/2 Tainted: G W I 4.0.4-unRAID #5
      Hardware name: Supermicro X8SAX/X8SAX, BIOS 2.0b 03/08/2013
      0000000000000009 ffff88041fc43dc8 ffffffff815ff789 0000000000000000
      ffff88041fc43e18 ffff88041fc43e08 ffffffff810443a3 ffff88041fc43e18
      ffffffff8153b632 ffff88040a608000 ffff880409e6ae00 ffff88040a6083a0
      Call Trace:
      <IRQ> [<ffffffff815ff789>] dump_stack+0x4c/0x6e
      [<ffffffff810443a3>] warn_slowpath_common+0x97/0xb1
      [<ffffffff8153b632>] ? dev_watchdog+0x194/0x1fa
      [<ffffffff810443fe>] warn_slowpath_fmt+0x41/0x43
      [<ffffffff8153b632>] dev_watchdog+0x194/0x1fa
      [<ffffffff8153b49e>] ? dev_graft_qdisc+0x69/0x69
      [<ffffffff8153b49e>] ? dev_graft_qdisc+0x69/0x69
      [<ffffffff8107de70>] call_timer_fn.isra.29+0x17/0x6d
      [<ffffffff8107e8d7>] run_timer_softirq+0x1b1/0x1d9
      [<ffffffff81047185>] __do_softirq+0xc9/0x1be
      [<ffffffff8104740f>] irq_exit+0x3d/0x82
      [<ffffffff81030a68>] smp_apic_timer_interrupt+0x3f/0x4b
      [<ffffffff81605c9d>] apic_timer_interrupt+0x6d/0x80
      <EOI> [<ffffffff81088441>] ? clockevents_notify+0x1c8/0x1d6
      [<ffffffff814f0869>] ? cpuidle_enter_state+0x49/0x9f
      [<ffffffff814f0862>] ? cpuidle_enter_state+0x42/0x9f
      [<ffffffff814f08e1>] cpuidle_enter+0x12/0x14
      [<ffffffff8106dfbd>] cpu_startup_entry+0x1d3/0x2da
      [<ffffffff8102f040>] start_secondary+0x122/0x140
      ---[ end trace 068f011acc5f3d9a ]---
      e1000e 0000:06:00.0 eth0: Reset adapter unexpectedly
      bond0: link status definitely down for interface eth0, disabling it
      e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
      bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
      mdcmd (255): spindown 6
      mdcmd (256): spindown 16
      mdcmd (257): spindown 4
      mdcmd (258): spindown 13
      mdcmd (259): spindown 5
      mdcmd (260): spindown 7
      mdcmd (261): spindown 13
      mdcmd (262): spindown 14
      mdcmd (263): spindown 3
      mdcmd (264): spindown 5
      e1000e 0000:06:00.0 eth0: Reset adapter unexpectedly
      bond0: link status definitely down for interface eth0, disabling it
      e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
      bond0: link status definitely up for interface eth0, 1000 Mbps full duplex

      root@orion:/# cat /etc/unraid-version
      version="6.0.1"
      root@orion:/# uname -a
      Linux orion 4.0.4-unRAID #5 SMP PREEMPT Fri Jun 19 22:47:24 PDT 2015 x86_64 Intel® Xeon® CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
      root@orion:/# ethtool -i eth0
      driver: e1000e
      version: 2.3.2-k
      firmware-version: 1.8-0
      bus-info: 0000:06:00.0
      supports-statistics: yes
      supports-test: yes
      supports-eeprom-access: yes
      supports-register-dump: yes
      supports-priv-flags: no
      root@orion:/#