slimshizn

Everything posted by slimshizn

  1. Sent my XG back, have been using the LB6M just fine for a while now. Going to order a copper 10GBase-T insert for an additional 10G link to another PC.
  2. I see you added this along with fast list. What is the IP? Is that Plex? Going to try it out myself. Edit: Found your reasoning in the rclone forums.
  3. Recently changed over to 10GbE and changed the MTU to 9000 on the server and at the switch. Everything is working fine now, but I noticed some download issues with a few programs on my Windows VM. Do I need to change that to match as well? Thanks!
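     In case it helps anyone, this is a rough way to verify jumbo frames end to end (assuming eth0 on the server side; substitute your own interface and IPs). On the server:
        ip link show eth0 | grep mtu            # confirm the interface is actually at 9000
        ping -M do -s 8972 <other 10GbE host>   # 8972 + 28 bytes of headers = 9000; fails if anything in the path is still at 1500
     Inside the Windows VM, the equivalent checks are "netsh interface ipv4 show subinterfaces" to see its MTU and "ping -f -l 8972 <server IP>" for the same don't-fragment test.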
  4. Just got this hooked up and it's running great. I did have some issues getting it started, but that's only because I didn't know the commands (no GUI). If anyone decides they want this, I can take you through what to do to get it up and running. Here's a great forum thread I read through to get most of my information.
  5. Yeah, if those ports aren't working correctly (10GBase-T), it should cut the price down a couple hundred dollars. This is a 300 dollar switch at the most.
  6. My XG-16 is a newer hardware revision, so I'm not sure what's going on. Other than that I have no issues with the XG, as it has the SFP+ ports I need and can use for expansion. I'm going to turn the LB6M into a Brocade and probably use that as well, and contact UBNT for a hardware replacement.
  7. Thought I'd put this here in case you're interested in using the 4 RJ45 ports at some point. Also, the SFP+ ports have been working fine for me, constant 10G to my other servers/PCs.
  8. Just wanted to update: I haven't had a single call trace since I turned off the custom IP containers. Now that things have calmed down, I'm going to try your br0.3 idea.
  9. Anyone else have any idea on what's going on here?
  10. Well, I did try increasing the rx and tx buffers and had some interesting results that didn't happen before. Using ethtool -G eth0 rx 8192 tx 8192 I had this happen. As you can see, I got a little carried away trying to use the command, lol. Hope this can help reproduce the issue for bug zapping. Edit: Interestingly enough, I tried this on my other server and it went through. Edit 2: Updated to 6.6.3, rebooted the server, ran ethtool -G eth0 rx 8192 tx 8192, and it went through without a kernel panic. Testing speeds again.
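      For anyone trying to reproduce this, the rough sequence (assuming eth0, as above; substitute your own interface) would be:
         ethtool -g eth0                    # show the current and maximum rx/tx ring sizes
         ethtool -G eth0 rx 8192 tx 8192    # raise the rings (only valid up to the maximums reported above)
         ethtool -g eth0                    # confirm the new values stuck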
  11. I have a Mellanox NIC in another server as well and have seen a kernel panic once. That was without a user-defined static IP for Dockers, which makes the argument that docker br0 is the issue a little more difficult. Since it was a one-time occurrence and didn't cause the system to lock up, I'm going to think of it as a fluke.
  12. I can give this a shot and see what happens; I'm not opposed to testing if it fixes the issue. I'm on 6.6.2 at the moment, but now that I've seen another call trace (I thought it might have been fixed in 6.6.2) I'll go ahead and update after the parity check. Also, the driver is Mellanox; I believe that is the mlx4 referenced above.
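      For reference, a quick way to double-check which driver the card is actually using (again assuming eth0) is:
         ethtool -i eth0    # should report something like "driver: mlx4_en" for ConnectX-2/3 cards, plus firmware version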
  13. I'm going to try and reproduce the problem on a different server.
  14. Was going well for about a week. I was told this was fixed in a previous version of Unraid, but today it caused my server to lock up completely, forcing an unclean shutdown (parity check as we speak). I can't have that happening, so I'm shutting down any fixed-IP Docker containers hoping that fixes the problem. I'd like to put this down as a bug but need someone's verification before I do so. Thank you. Here's the diagnostics. tower-diagnostics-20181024-1640.zip
  15. It came back and crashed the server today. I believe it was due to having SteamCacheBundle with a custom br0 fixed IP. Funny though that none of this happened prior to using a 10GbE NIC.
  16. I used UD to connect to the other server via SMB, opened Krusader, and transferred a file off the cache to the destination server. The fastest I could get was 124MB/s. Edit: went up to 149MB/s, then back down.
  17. I did some more testing. I added an ISO to each server, then sent each to an old QNAP that I use for appdata backup and important files. From my servers to the QNAP it's staying at 60-65MB/s. From the QNAP back to the servers it's 60-65MB/s. From ANY of the three to my desktop it's 112-115MB/s, and from the desktop to any server it's 112-115MB/s. My desktop only has a 1Gb NIC. Not sure what's going on with that.
  18. I've also tested the read and write speeds of both SSDs on the source and destination. They are both at the speeds they should be. Prior to having any 10GbE, using 1G Ethernet I would always get 112-117MB/s transfer speeds. Since then I have changed out both NICs and cables.
  19. I've done that, and a direct transfer from one to the other is still slow. Just to make sure it wasn't the cables, I switched over to an SFP+ module and fiber, all brand new. Tested speeds and still SLOW. Direct transfer speed is around 65MB/s.
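      A raw network test between the two boxes would take the disks and SMB out of the equation (a sketch, assuming iperf3 is installed on both ends; hostnames/IPs are placeholders):
         iperf3 -s                       # on the destination server
         iperf3 -c <destination IP>      # on the source server; a clean 10GbE link should show roughly 9.4 Gbits/sec
         iperf3 -c <destination IP> -R   # reverse the direction to test the other way
      If iperf3 shows full line rate both ways, the bottleneck is in SMB or the disks rather than the NICs or cables.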
  20. Are you sure it works like that? Prior to my 10GbE upgrade, using my desktop I was getting 112MB/s transferring from one server to the next with no hiccups. Using Krusader on server A to transfer to B, the speeds are exactly the same as if I were using the desktop.
  21. No problem, thank you for the reply. My desktop is not 10GbE. If I transfer from one server to the other using the cache pool, I get 65-75MB/s at the fastest. I've tried all types of files and different sizes.
  22. I listed that as steps I've taken.