ajgoyt

Members
  • Posts: 171
Everything posted by ajgoyt

  1. I believe I found a bug with RC5; both my cards and SFP cables are fine. Mike and John, see this post that RobJ moved me to: http://lime-technology.com/forum/index.php?topic=51676.new#new
  2. If the Mellanox is your main NIC and it has internet connectivity, set it to eth0; you have to reboot after. It's been my main NIC since I switched to 10Gb for my unRAID server and my workstation. So I did exactly that: in the interface rules I set eth0 to the ConnectX-2 and the MB LAN port to what the Mellanox was (basically swapped them), and it wanted a reboot. I did, and still no activity lights on the Mellanox; I'm glad I have the GUI on my monitor, because it says eth0 is down. I think somewhere in the upgrade from 6.1.9 to RC5 my Mellanox card died. Tonight I can swap cards with this Win10 workstation to confirm; I really don't want to disturb my SATA PCI card in unRAID. I think it's just a fluke that the Mellanox died... hopefully it's not my PCIe x8 slot. I will be back tonight for more troubleshooting to confirm my Mellanox is DOA. Thanks for the help! AJ

     Tom and John, there is something going on with RC5. I swapped cards with the Win10 workstation (as I type this) and the ConnectX-2 is working great. So for grins I put the Win10 card in a different PCIe x8 slot that was working just fine with my GT220 video card, and still no lights etc., though System Devices sees the card. So I have verified my cards and SFP cables are just fine. I will run RC5 with my MB 1Gb LAN port till there is a fix, or if you want I can do some testing for you. I have attached my diagnostics to this. Please let me know, with detailed instructions, for any testing! AJ tower-diagnostics-20160907-7-14PM.zip
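The console checks described in the post above (is the card seen on the PCI bus, and which interface is down) can be sketched like this. This is an assumption-laden sketch for any Linux console, not an unRAID-specific tool: `lspci` may need the pciutils package, and interface names vary per system.

```shell
# Sketch: quick sanity check for a NIC that shows no lights.
# Is the card still visible on the PCI bus?
lspci -nn 2>/dev/null | grep -i mellanox || echo "no Mellanox device found on the PCI bus"

# Which interfaces did the kernel create, and what is their link state?
for iface in /sys/class/net/*; do
    name=$(basename "$iface")
    state=$(cat "$iface/operstate" 2>/dev/null || echo unknown)
    echo "$name: $state"
done
```

If the card appears in `lspci` but its interface reports `down`, the card itself is likely fine and the problem is configuration (bonding, interface rules) rather than dead hardware.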
  3. If the Mellanox is your main NIC and it has internet connectivity, set it to eth0; you have to reboot after. It's been my main NIC since I switched to 10Gb for my unRAID server and my workstation. So I did exactly that: in the interface rules I set eth0 to the ConnectX-2 and the MB LAN port to what the Mellanox was (basically swapped them), and it wanted a reboot. I did, and still no activity lights on the Mellanox; I'm glad I have the GUI on my monitor, because it says eth0 is down. I think somewhere in the upgrade from 6.1.9 to RC5 my Mellanox card died. Tonight I can swap cards with this Win10 workstation to confirm; I really don't want to disturb my SATA PCI card in unRAID. I think it's just a fluke that the Mellanox died... hopefully it's not my PCIe x8 slot. I will be back tonight for more troubleshooting to confirm my Mellanox is DOA. Thanks for the help! AJ
  4. I believe you have to turn off bonding; these are my settings, ConnectX-2 is eth1: Johnnie, OK, I turned off bonding and unRAID became unresponsive. I waited about 5 minutes, rebooted from the GUI, and bonding is off. I ported up eth2, which is the ConnectX-2 card; it thought for a bit and then the menu came back. I set the IP assignment to auto and it still says in red (at the very bottom) that eth2 is down, check cable. Do I set eth0 with the interface rules to the ConnectX-2, or?
  5. mlx4_core is the driver. I have ConnectX-2 cards and they work fine on v6.2; like bonienl pointed out, set it as eth0 or press Port Up. Also, don't forget that on v6.2, as long as you have console access through IPMI or a local monitor, you can boot with the GUI option and change network settings with the built-in browser. OK, I have to go to work in a while, but I followed bonienl's advice: deleted the network settings and rebooted (my static IP I reset later), then I reinstalled RC5 and it installed fine; rebooted again. Dockers are back (Plex, Emby) but need updating; don't care, I stopped them for now. I am not touching anything in Network Settings, as they look different than before! And I still have no activity or power light on my ConnectX-2. Even though the ConnectX-2 card is not lit up, it is showing in devices. I am in on the MB 1Gb LAN port. I have attached another diagnostic. In the network settings it appears bonding is set to active-backup (1) on eth0, and the members are eth0, eth1, eth2. If I look at eth1 and eth2 it says D0:50:99:8A:9B:17 - interface is shutdown (inactive), probably because bonding is on? I am a little confused about what I should do; tell me if this is wrong. Do I change the eth from the interface eth0, or do I use the interface rules at the bottom for the priority of each eth? I have about an hour before I need to get ready for work; if someone can set me straight (slap me, lol) I would appreciate it..... tower-diagnostics-20160907-0645.zip
  6. Have you tried "Port Up" to make the inactive port active? Sorry, how do you port up? Go to network settings; under the corresponding interface is a button labeled 'Port Up' (only available for ports other than eth0). Well, I messed up, I guess: at the bottom, for eth0, I set it to the Mellanox 4, which has to be my 10Gb card, but in reality it's a ConnectX-2. Rebooted and still no activity lights on the 10Gb card, and now I cannot get into the main page; on my monitor it says "Device eth0 does not exist", so crap, I guess I'll try to fix it by going back a version, or?

     Power down your unRAID system, take your flash device out, and put it in another PC. Then delete the files network.cfg and network-rules.cfg under the folder config. When you now boot unRAID it will start with default settings, which makes all interfaces available. From this point you can (re-)configure your desired network settings.

     bonienl, while waiting for you to reply I shut down, took out the flash drive, copied bzroot and bzimage to the root, and rebooted, and it went back to 6.1.9. I still don't have activity lights on my 10Gb; all shares etc. are OK as far as I can tell, but now my dockers are missing! My cache SSD is being read from the main page and my 52GB docker file is still there. What do you suggest I do at this point? (1) Upgrade back to RC5, then work on getting back my two dockers, or (2) stay with 6.1.9, fix the dockers, and then try to get the 10Gb card working. Sorry, I will wait till you reply... the only dockers I had were Plex and Emby!
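bonienl's reset procedure above boils down to deleting two files from the flash drive's config folder. A minimal sketch, assuming the standard unRAID layout (`/boot/config` on a running server, or the `config` folder when the stick is mounted in another PC); the `CONFIG` variable is a stand-in so the paths can be adjusted:

```shell
# Sketch of the network-settings reset described above.
# CONFIG stands in for the flash drive's config folder; /boot/config is
# an assumption based on the standard unRAID layout.
CONFIG="${CONFIG:-/boot/config}"

for f in network.cfg network-rules.cfg; do
    if [ -e "$CONFIG/$f" ]; then
        rm -v "$CONFIG/$f"      # unRAID regenerates default settings on next boot
    else
        echo "$CONFIG/$f not present, nothing to do"
    fi
done
```

After the next boot all interfaces come up with default settings and can be reconfigured from the GUI.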
  7. Have you tried "Port Up" to make the inactive port active? Sorry, how do you port up? Go to network settings; under the corresponding interface is a button labeled 'Port Up' (only available for ports other than eth0). Well, I messed up, I guess: at the bottom, for eth0, I set it to the Mellanox 4, which has to be my 10Gb card, but in reality it's a ConnectX-2. Rebooted and still no activity lights on the 10Gb card, and now I cannot get into the main page; on my monitor it says "Device eth0 does not exist", so crap, I guess I'll try to fix it by going back a version, or?
  8. Have you tried "Port Up" to make the inactive port active? Sorry, how do you port up? Wait, could it be that I have to stop the array, then go into network settings and set eth0 as 1st, eth1 as 2nd, and so on? After the reboot I see more LAN ports; see the screenshot. I am not running a ConnectX-4, I'm running a 2. Looks like unRAID only has ConnectX-4 drivers? Or maybe they're backward compatible!
  9. Upgraded to RC5 and my Mellanox ConnectX 10Gb card says inactive in the network settings. I got to the network settings by hooking up to the 1Gb side of my 10Gb switch with Cat 6 to the MB LAN port. I performed a full parity check; it ran great with no errors and parity was successful. Then I rebooted, thinking it was just a fluke that my 10Gb card was not even lit up. Same issue: no 10Gb lights. Reseated the SFP copper cables in the card and the switch; still no activity lights on the card. If I go to the network settings it says this: D0:50:99:8A:9B:19 - Interface is shutdown (inactive). If I go to System Devices it shows the following (below). My Mellanox ConnectX card was just working in 6.1.9; do I have to enable this 10Gb card through some special settings I am not seeing? Now, I do have two 1Gb LAN ports on the MB, and the other 1Gb port does not light up, so maybe it's that one that's inactive?

        0b:00.0 Ethernet controller [0200]: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] [15b3:6750] (rev b0)

     Diagnostics attached... I did look around with ethtool and see this:

        Settings for eth2:
            Supported ports: [ FIBRE ]
            Supported link modes: 10000baseT/Full
            Supported pause frame use: No
            Supports auto-negotiation: No
            Advertised link modes: 10000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: No
            Speed: Unknown!
            Duplex: Unknown! (255)
            Port: FIBRE
            PHYAD: 0
            Transceiver: external
            Auto-negotiation: off
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000014 (20)
                link ifdown
            Link detected: no
        driver: mlx4_en
        version: 2.2-1 (Feb 2014)
        firmware-version: 2.9.1000
        expansion-rom-version:
        bus-info: 0000:0b:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: no
        supports-register-dump: no
        supports-priv-flags: yes

     AJ tower-diagnostics-20160906-2154.zip
  10. The addition of this option is a bug fix: it fixes slower than expected 10Gb/sec performance, and we chose to fix it in this release. We explicitly marked this "experimental" and "off" by default because it has only limited testing. Tom, I need the 10Gb when moving files from my workstation to my server. So, since this is the last RC before the official release, barring any major issues, I should be fine to upgrade from 6.1.9; your thoughts? It's been fine to upgrade for months. OK Tom, I will update tonight following the instructions for 6.2. I run no VMs, but I do run a few dockers (Plex, Emby, etc.). If I recall, I will have to rebuild/reload the dockers? What is the safest: install from the plugin, or download and generate a fresh install? In either case I will back up my thumb drive first...
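The "back up my thumb drive first" step above can be sketched as a single tarball of the flash drive. The `/boot` mount point is an assumption (the usual place unRAID mounts the stick); `BACKUP_DIR` is a made-up destination variable:

```shell
# Sketch: archive the unRAID flash drive before upgrading.
# /boot is assumed to be where unRAID mounts the stick.
SRC="${SRC:-/boot}"
BACKUP_DIR="${BACKUP_DIR:-$HOME}"
stamp=$(date +%Y%m%d-%H%M)

if [ -d "$SRC" ]; then
    # -C so the archive holds relative paths, restorable onto a fresh stick
    tar -czf "$BACKUP_DIR/flash-backup-$stamp.tar.gz" -C "$SRC" .
    echo "backup written to $BACKUP_DIR/flash-backup-$stamp.tar.gz"
else
    echo "$SRC not mounted"
fi
```

Restoring is the reverse: extract the tarball onto a freshly prepared flash drive and the license key, config, and docker templates come back with it.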
  11. The addition of this option is a bug fix: it fixes slower than expected 10Gb/sec performance, and we chose to fix it in this release. We explicitly marked this "experimental" and "off" by default because it has only limited testing. Tom, I need the 10Gb when moving files from my workstation to my server. So, since this is the last RC before the official release, barring any major issues, I should be fine to upgrade from 6.1.9; your thoughts? I know there's always a risk with a beta or RC, or even a stable release. So are we weeks away from the official 6.2, or?
  12. And NVMe cache support. Is NVMe cache support coming for the final release? I can hardly wait to switch for the turbo write support. I transfer files from my Win10 workstation to unRAID 6.1.9 through my ConnectX-2 cards and 10Gb switch, and still drop from 320 MB/s to about 50 MB/s halfway through the transfer on platter drives in the array. From SSD on the workstation (Win10) to the SSD cache, though, the transfer is pretty steady at 300+.
  13. No, an MTU of 1500 does not indicate jumbo frames. I believe you're right. I found an article at http://lime-technology.com/wiki/index.php/Improving_unRAID_Performance#Jumbo_Frames that talks about changing it under jumbo frames; the example is:

        # ifconfig {interface-name} mtu {size}
        # ifconfig eth0 mtu 9000

     QUESTION: based off the pictures I've attached, what would be my interface name for my Mellanox, (eth2) or (FIBRE)? This shows up under System Devices: 0b:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0). Sorry, a little noobish with Linux commands.... Also, anyone have any ideas on which W10 driver setting to choose? Before you make any changes, first check the maximum MTU size supported by your switch/router; it will drop those frames if they are too large. OK, thanks for the tip. I just found this in the manual for the switch; what do you think would be a good number to try first, maybe 65000?

        l2mtu (integer [0..65536]; Default: ) Layer2 Maximum transmission unit
        mtu (integer [0..65536]; Default: 1500) Layer3 Maximum transmission unit

     Any ideas about my question above on which name (eth2 or FIBRE) is the interface name for my Mellanox?
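The wiki's `ifconfig` form from the post above can be wrapped in a small sketch that first reads the current MTU from sysfs. `IFACE=eth2` is an assumption taken from the ethtool output earlier in the thread; the commands are printed rather than executed here, since they need root and only make sense once the switch end accepts the larger frames:

```shell
# Sketch of the jumbo-frames change discussed above.
# IFACE is an assumption; verify with 'ip link' before changing anything.
IFACE="${IFACE:-lo}"

# Current MTU straight from sysfs
echo "$IFACE current MTU: $(cat /sys/class/net/$IFACE/mtu)"

# The wiki's ifconfig form, and the modern iproute2 equivalent.
echo "would run: ifconfig $IFACE mtu 9000"
echo "would run: ip link set dev $IFACE mtu 9000"
```

To answer the question in the post as far as Linux is concerned: the interface name is the `ethN` name (eth2 here); "FIBRE" is just ethtool's description of the port type, not a name you can pass to commands.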
  14. No, an MTU of 1500 does not indicate jumbo frames. I believe you're right. I found an article at http://lime-technology.com/wiki/index.php/Improving_unRAID_Performance#Jumbo_Frames that talks about changing it under jumbo frames; the example is: # ifconfig {interface-name} mtu {size} # ifconfig eth0 mtu 9000. QUESTION: based off the pictures I've attached, what would be my interface name for my Mellanox, (eth2) or (FIBRE)? This shows up under System Devices: 0b:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0). Sorry, a little noobish with Linux commands.... Also, anyone have any ideas on which W10 driver setting to choose?
  15. Hi guys, need a little help here with my ConnectX-2s in my W10 machine and unRAID 6.1.9, and the transfer speeds between the two. I purchased a MikroTik CRS226-24G-2S+IN; the switch has two 10Gb SFP+ ports. Both computers have ConnectX-2 cards hooked up to the 10Gb SFP+ ports, and both are working fine. I can see that the MTU size on the unRAID machine is set to 1,500, which appears to be jumbo frames? On the W10 machine, looking through the adapter properties, I see multiple settings; by default the driver set this to balanced tuning, and I could only get about 110 MB/s. I then picked multicast, since based off the description in the driver it seemed the best option, and my speeds rose to 312 MB/s max. Now, these transfers are either a movie of about 5 gig or a Linux ISO. I have also noticed at times that the speed will drop to around 50 MB/s for whatever reason, I am not sure; other times it will be steady around 305-310 MB/s. I have seen these speeds either to my OCZ 480GB SSD or to any of the SATA 3 drives. Is there a bottleneck (I hope not, as I believe I have a pretty fast server), or is there a setting in unRAID or in the W10 driver that I am missing? Please let me know your thoughts! AJ
  16. Probably mentioned already, but here is a link to the announcement: http://www.seagate.com/about-seagate/news/seagate-now-shipping-10tb-helium-enterprise-drive-master-pr/ And an Amazon price, ughhh: http://www.amazon.com/gp/product/B01DAI6JUS/ref=olp_product_details?ie=UTF8&me=
  17. And if you try moving some test files to the cache, will they transfer?
  18. Frank, will turbo write eliminate the network error issue while copying large (or smaller) amounts of data from a Win PC using TeraCopy, or just a plain Win 10 transfer? I specifically upgraded both my unRAID machine and my Windows machine to i5s with 32GB of RAM on very good MBs, and still get the network error issues on transfers, typically to an unRAID share that's close to being full. It's irritating at times to come home and find out that TeraCopy crapped out with the network error on the 5th file. I have since been using MC to move files once they're on the unRAID machine, and it seems to be stable. I am on unRAID 6.1.9 Pro. If I copy from Windows to the cache drive (only a 240GB SSD) the network errors are basically gone, but it seems I have moved a 25GB ISO to the cache before and it also gave the network error. Most of the time the transfers to the cache were fine; keep in mind these transfers are smaller in size. Then, once on the cache drive, transfers to a shared drive on the array appear to be somewhat stable with smaller files, i.e. 8GB-12GB... thanks, AJ
  19. Krackato, did you ever get the mappings figured out? I would like to install this container, but am not sure what goes in my cache drive folder and what doesn't.
  20. Still an issue for me. I can download 10 files at a time or just a single file; NO FILE WILL GO FASTER THAN 1.5 MB/s. It is not the site I download from, as I can easily get over 20 on a single file with FileZilla on my iMac. Thanks Scott, I would like to integrate an FTP client into my unRAID system, but I have the same issue as you, only on Win10. I bet you're on Comcast, as I am? I have since moved on to a different FTP client that allows multiple connections to a single file with segmented downloading; CuteFTP 9 and Free Download Manager do this perfectly, and I am not limited to 1.5 MB/s per file. I have also created a topic on FileZilla's site, as other peeps have, and the admin/team have ignored the fact that people want multipart/segmented downloads. Maybe I should recommend FDM as a docker to be created; CuteFTP 9 is not free, unlike FDM or FileZilla. Good luck, AJ
  21. Just curious if people are getting beyond the 1.5 MB/s cap with this container?
  22. Question, and sorry, I didn't read all 40 pages of the thread. Can't I just map the shares that I want CrashPlan to back up on my Windows 10 machine and then run the CrashPlan app off Windows 10? Not a VM, just another Win10 machine on the network. I know it wouldn't be automatic like the docker would be... 95% of the time the Win10 machine is turned on, and files are 100% copied over from the Win10 machine to unRAID. Maybe this has been discussed, but this approach seemed easy to me! AJ
  23. Thanks for this, as I am a Plex Pass user and the official Plex docker from LT shows an update, but I restarted the docker multiple times and it's still the same version. I will switch to the linuxserver Plex docker using this guide. AJ
  24. I would copy all of the info from the disk you want to replace to a user share on the protected array. This step is probably not needed, since you have a pool running already, and I'm assuming you let it finish mirroring onto the second cache disk? Then I believe the next step would be to stop the array, scroll down to the cache disk you want to replace, and add the disk that replaces it; make sure the file system on the new disk is set to BTRFS. Once you pick the replacement disk, just start the array... it should then start to mirror onto the new disk... if any global mods want to jump in, maybe I missed a step! AJ
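For the curious, the GUI steps above roughly correspond to BTRFS's device-replace operation. This is only a hypothetical command-line sketch, not the unRAID procedure (the GUI handles it for you): the device paths are placeholders, and the commands are printed rather than executed.

```shell
# Hypothetical CLI equivalent of replacing a disk in a two-device
# BTRFS cache pool. Paths are made-up placeholders.
OLD=/dev/sdX1   # cache device being replaced (placeholder)
NEW=/dev/sdY1   # replacement device (placeholder)
MNT=/mnt/cache  # pool mount point (assumption: unRAID default)

echo "would run: btrfs replace start -f $OLD $NEW $MNT"
echo "would run: btrfs replace status $MNT"
echo "would run: btrfs balance start $MNT   # optional, after the replace completes"
```

On unRAID, stick to the GUI flow described in the post; this sketch is only to show that the rebuild onto the new disk is a standard BTRFS operation, not anything unRAID-specific.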