Everything posted by Jclendineng

  1. Can you use Vault to store docker secrets? I am not aware of any way to do so based on the docs, but thought I would ask. An example: say you have a DB connection you are passing creds to in a docker application in unraid. I know you can use docker secrets (not easily in unraid), but Vault would be nice to use for this.
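A minimal sketch of working around the lack of native Vault integration: fetch the credential with the vault CLI at container start and pass it as an environment variable. The VAULT_ADDR/VAULT_TOKEN setup, the secret path secret/myapp, and the image name are placeholders I've assumed, not unraid features.

```shell
# Hypothetical workaround: read the DB password from Vault's KV store and
# hand it to the container as an env var (all names below are placeholders).
DB_PASS=$(vault kv get -field=password secret/myapp)

docker run -d --name myapp \
  -e DB_PASSWORD="$DB_PASS" \
  myimage:latest
```

This keeps the credential out of the container template, though it still ends up in the container's environment.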
  2. FYI, it is not fixed in 6.10.3; they just removed eth2, so I have eth0 (Mellanox port 1), eth1 (onboard management NIC), and eth2 (Mellanox port 2). eth0/eth2 have the same MAC address now, and interface rules only shows eth0/eth1, with my duplicate NIC in the list alongside the built-in NIC. So it's still duplicated...
  3. Is there a Slackware-friendly alternative to speedtest-cli or fast for Linux? It would be handy to have those packages in NerdTools.
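One hedged fallback, assuming NerdTools provides python3 and pip: the upstream PyPI speedtest-cli works without a Slackware package, and iperf3 covers LAN-side testing.

```shell
# Assumption: python3/pip3 are available (e.g. via NerdTools); speedtest-cli
# here is the upstream PyPI tool, not an unraid-specific package.
pip3 install speedtest-cli
speedtest-cli --simple    # ping, download, upload against a public server
```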
  4. It wasn't listed because it wasn't a known issue until after the release. I have Mellanox as well, and it's not been fun times. I guess live and learn when it comes to older NICs; OPNsense (BSD) doesn't support Mellanox either in certain bridging modes, so at this point I may just try to offload everything and get Intel 10G NICs. Eagerly waiting for a fix in the meantime, though. As always, thanks to the team for incredible work; there are a gazillion configurations out there, so missing a few is understandable and honestly expected, as no one can predict what NICs people are using.
  5. Awesome! I was wondering if we Mellanox folks were going to have issues. It seems that brand isn't well supported on anything I run, meaning my next purchase may need to be Intel...
  6. Interesting. It appears that the changes to prevent duplicate NICs removed the interface rules by accident. I am also seeing changes to the network.cfg file removing the interface MACs, so unless I'm missing something, it is no longer possible to edit the interfaces directly from the config either... correct me if they were moved and I missed it.
  7. FYI, it appears interface rules were removed in the .10.2 release, so you can no longer change to eth0. Edit: Disregard; it appears to be a known bug that many are reporting now, so a fix should be coming. They did not remove it on purpose.
  8. Cool, cool, I'll check it out. I'm LAGging two 10Gb connections to my aggregation switch, so I definitely enjoy reading other people's experiences with this. IMO 10Gb is the sweet spot currently, as 40Gb isn't quite ready for *most* home users. Back to this: I updated to the new patch and my duplicates are still gone, so it's already better than 6.8. Glad a permanent fix is coming. Thanks to all the unraid devs for the hard work.
  9. OK, just making sure; I didn't fully understand from your post whether you had tried the GUI post-6.10. Mine is now fixed, but maybe, as you say, it's only because I haven't tried to edit.
  10. Maybe the dupes came from editing network.cfg manually; I don't know that that's recommended. You can try doing it from the GUI; that looks like it works and is persistent... I wonder what they fixed? From the reports, they didn't think it was an issue per se, so I'm assuming they didn't do anything and it was one of the many upstream driver/dependency/kernel updates. In any case, I am resting a bit easier knowing that one inconsistency is fixed and I don't have to worry about why anymore.
  11. Are you guys saying do NOT use bonding on 6.10? I use bonding to LAG two 10GbE ports to my aggregation switch. Is that not recommended?
  12. I'll note that it looks like you are using single-stream iperf, in which case that's normal. Try iperf3 -c server -P 8 for 8 parallel streams. Also, I would NOT change your MTU; that's asking for trouble. 1500 is plenty for 10Gb. Jumbo frames are not needed unless you know what they're used for and you actually need them; they're going to give you trouble down the line. Remember, anything on the network that talks to that host needs to have its MTU tuned now, and if there is any mismatch you will drop packets. My suggestion is to leave it at the default and save yourself the future problems.
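The multi-stream suggestion above, sketched out ("server" is a placeholder for whatever box runs the iperf3 server):

```shell
# On the receiving machine: start an iperf3 server.
iperf3 -s

# On the sending machine: 8 parallel TCP streams, which a 10Gb link
# usually needs to reach line rate.
iperf3 -c server -P 8

# Reverse the direction to test the other path without swapping roles.
iperf3 -c server -P 8 -R
```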
  13. Was this fixed for you in 6.10.0 stable? I swear it was still broken in the betas, but lo and behold, it is correct for me now. Edit: The only thing I did recently would (should) have zero effect on networking. I needed (for long-story reasons) to use a cache NVMe drive in another server, and I had another one I was swapping in. So I set the cache to Yes so it moves everything to the array. I then stopped the array, disabled VMs and Docker, and started up again so I could move the system and domain folders. I then replaced the cache drive, made sure it was up and running, and set the cache to Prefer so it copies back to the cache. When that was done, I set it to cache Only as it was before, stopped the array, enabled Docker and VMs, and started up again. I then noticed an issue with my 802.3ad LAGG bond and went to network settings to fix it; the bond was screwed up, as it had dropped my duplicate NIC. So I re-did my LAGG, set up the VLANs again, and started the array. Not sure what any of those steps had to do with the network setup, except that I did have the Docker and VM services stopped when I rebooted with the new cache drive, so potentially that cleared something in the config that Docker or the VM manager was doing to create the duplicate NIC.
  14. I just don't touch it and it seems to work OK. It still shows multiple dupes, but that's fine; as long as you don't try re-arranging them, it works.
  15. I'll note that if you want to place unraid behind a load balancer/reverse proxy and handle SSL termination yourself, you need to set "Use SSL" to No, and you will be all set. Auto isn't really that great; what's the point of a local server if you need to reach out to the internet to use it? Doesn't make sense, IMHO. Use SSL = Yes might make sense if you don't have a local domain you are already using, but it really screws things up if you do. So just be aware: if you want to use this as "unraid.localsite.net" or whatnot internally, you need to turn off SSL on the unraid side to get it to redirect properly.
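A sketch of that setup with nginx as the TLS-terminating proxy; the hostname, cert paths, and backend IP are placeholders, and unraid's "Use SSL" is assumed set to No so the backend speaks plain HTTP:

```nginx
server {
    listen 443 ssl;
    server_name unraid.localsite.net;          # placeholder internal name

    ssl_certificate     /etc/nginx/certs/localsite.crt;
    ssl_certificate_key /etc/nginx/certs/localsite.key;

    location / {
        proxy_pass http://192.168.1.10:80;     # unraid webGUI, plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # the webGUI uses websockets, so upgrade support matters
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```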
  16. A couple of us have the same issue: any dual-port Mellanox NIC + onboard NIC, basically the same stuff you are seeing. Did you figure out what was wrong? I filed a bug report and they said it was card related, so did you have to flash anything? I have updated to the latest FW and disabled netboot for faster startups. I can recreate it with all my dual-port cards; single port seems fine, but I only have one to test.
  17. OK, I filed a bug report; it's not an unraid issue. Still trying to track down how unraid determines the NIC, as I received conflicting information: that unraid does not use the same MAC for multiple interfaces, and yet a clean install assigns one MAC for port 1 and two MACs for port 2. I'll report back if I find info. I'm going to stand up a Debian server to see if I get the same result, and that will 100% determine if it is unraid related. Edit: see this:
  18. Yes, but you have two dual ports. I'm wondering if unraid has issues with only three. When I tested with a single-port ConnectX-3, it showed up fine (one MAC for that, one MAC for onboard ethernet). When I moved to test any of the dual ports I have, onboard shows up as one and the dual port shows up as three. Possibly unraid needs pairs of ports? 2/4 vs 2/3? Just a thought. Yeah, all cards are on the latest FW from Nvidia; the screenshot of the card info I posted is from one of the cards I'm having issues with. I'm getting another brand of dual-port SFP+ this weekend, so I'll have three to test with. Thank you!
  19. Ignore? Haha! That's a good way to solve problems, maybe just not this one. But seriously, unraid does not (obviously) let you have multiple identical MACs, and so it will restrict you from modifying the assignments. Plus, if it is not expected functionality, it's not good to ignore, as anything unexpected when a ton of data is concerned isn't great. See attached... The initial diagnostics attached should be similar; I performed this step prior to those as well. This diagnostics was taken after a clean install. I re-set up all my shares/dockers, but the USB is a clean install. I then removed eth2 (in this case) from the config file, verified it went away in the GUI, and rebooted. On boot, eth2 showed up again. Thank you for assisting! Edit: I can move this to the stable bug reports if needed. When I reinstalled last night, I installed 6.9.2 just for kicks and it does the same thing, so it's not a beta issue... tower-diagnostics-20220325-0816.zip
  20. I have some Mellanox cards I've been testing in unraid, and I get duplicate MAC addresses. Attached are two screenshots and diagnostics. You can see I have duplicate MACs in the network section, and when I try to reassign them it states I have duplicates (doh!). The last screenshot shows the card itself only has the normal two MACs, one per port. Interestingly, I seem to get a MAC duplicated at every reboot. Edit: Cards are MCX312A (dual port) and MCX311A (single port); the single port works great and shows one MAC. Both dual ports I tested with showed any number of duplicate MACs depending on when I tested. Edit 2: eth0, eth1, and eth2 are all fine and show up. I have an eth4 that has the duplicate MAC on it, and occasionally an eth5. Those are not visible on the network page except in the assignment section. tower-diagnostics-20220324-1517.zip
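For anyone trying to reproduce this, a quick console check shows what the kernel actually sees, independent of the unraid GUI:

```shell
# List interface name + MAC as the kernel reports them.
ip -br link | awk '{print $1, $3}'

# Print any MAC that appears on more than one interface.
ip -br link | awk '{print $3}' | sort | uniq -d
```

If the second command prints anything, the duplication exists at the OS level, not just in the webGUI.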
  21. Welp, that's a hard no. I tested with a single port and it flashed fine; put a dual port in and unraid had a hernia. It adds a duplicate MAC every time I reboot, so I am going to file a bug report in the bug report section, as this hasn't gotten any replies in... almost ever. I can replicate it, though; I tested with a dual-port SFP+ card and it does indeed have duplicates in the OS.
  22. I'll take a stab at this: what model of NIC? I had a similar issue with a Mellanox ConnectX-3 MCX312A that I was able to fix. Duplicate MACs are the result of a bad or improper firmware flash, so if you bought them off eBay like that, the seller didn't understand how to flash when upgrading the firmware. Basically, I just flashed the static MAC and GUID again. https://forums.servethehome.com/index.php?threads/mellanox-connectx-3-en-duplicate-permanent-mac-address-issue.24790/ I'm sure that's probably your issue, though I don't know how to flash HP cards (there should be some documentation online).
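Roughly, the fix from that thread looks like the below. The device path, MAC, and firmware file are placeholders; verify the flint syntax against the NVIDIA/Mellanox MFT docs before burning anything.

```shell
# Sketch, not a recipe: re-burn ConnectX-3 firmware with an explicit base
# MAC so both ports derive unique addresses again. Requires the MFT tools.
mst start                                   # load the MST access modules
mst status                                  # find the device node
flint -d /dev/mst/mt4099_pci_cr0 query      # check current MACs/GUIDs
flint -d /dev/mst/mt4099_pci_cr0 -mac 0002c9aabbcc burn fw-ConnectX3-rel.bin
```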
  23. A couple of standalone connectors work, barely; there is a lot of play, though. Never had that happen.