Jclendineng's Achievements


  1. Can you use Vault to store Docker secrets? I am not aware of any way to do so based on the docs, but thought I would ask. An example: say you have a DB connection you are passing creds to in a Docker application in Unraid. I know you can use Docker secrets (not easily in Unraid), but Vault would be nice to use for this.
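In case it's useful to anyone reading along, this is roughly what plain Docker secrets look like in a compose file, outside of Unraid's template UI. Everything here (service name, secret name, file path) is a made-up placeholder, not taken from any real setup:

```yaml
# Sketch: passing a DB password to a container as a Docker secret.
# All names and paths below are placeholders.
services:
  db:
    image: postgres:16
    environment:
      # The official postgres image reads the password from the file
      # the secret is mounted at under /run/secrets/.
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt   # local file holding the secret value
```

Vault would still be nicer, since the secret here ends up in a plain file on the host; this is just the stock Docker mechanism.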
  2. FYI, it is not fixed in 6.10.3; they just removed eth2 from interface rules. I have eth0 (Mellanox port 1), eth1 (onboard management NIC), and eth2 (Mellanox port 2). Eth0 and eth2 have the same MAC address now, and interface rules only shows eth0/1, with my duplicate NIC in the list alongside the built-in NIC. So it's still duplicated...
  3. Is there a Slackware package for speedtest-cli or fast for Linux? It would be handy to have those packages in Nerd Tools.
  4. It wasn't listed because it wasn't a known issue until after the release. I have Mellanox as well, and it's not been fun times. I guess live and learn when it comes to older NICs; OPNsense (BSD) doesn't support Mellanox either in certain bridging modes, so at this point I may just try to offload everything and get Intel 10G NICs. Eagerly waiting for a fix in the meantime, though. As always, thanks to the team for the incredible work; there are a gazillion configurations out there, so missing a few is understandable and honestly expected, as no one can predict what NICs people are using.
  5. Awesome! I was wondering if us Mellanox folks were going to have issues. It seems that brand isn't well supported on anything I run, meaning my next purchase may need to be Intel...
  6. Interesting. It appears that the changes to prevent dupe NICs removed the interface rules by accident. I am also seeing changes to the network.cfg file removing the interface MACs, so unless I'm missing something, it is no longer possible to edit the interfaces directly from the config either... correct me if they were moved and I missed it.
  7. FYI, it appears interface rules were removed in the 6.10.2 release, so you can no longer change to eth0. Edit: Disregard; it appears to be a known bug that many are reporting now, so a fix should be coming. They did not remove it on purpose.
  8. Cool cool, I'll check it out. I'm LAGGing two 10Gb connections to my aggregation switch, so I definitely enjoy reading other people's experiences with this. IMO 10Gb is the sweet spot currently, as 40Gb isn't quite ready for *most* home users. Back to this: I updated to the new patch and my duplicates are still gone, so it's already better than 6.8. Glad a permanent fix is coming. Thanks to all the Unraid devs for the hard work.
  9. OK, just making sure; I didn't fully understand from your post whether you had tried the GUI post-6.10. Mine is now fixed, but maybe, as you say, it's only because I haven't tried to edit.
  10. Maybe the dupes came from editing network.cfg manually; I don't know that that's recommended. You can try doing it from the GUI, which looks to work and to persist... I wonder what they fixed? From the reports, they didn't think it was an issue per se, so I'm assuming they didn't do anything and it was one of the many upstream driver/dependency/kernel updates. In any case, I am resting a bit easier knowing that one inconsistency is fixed and I don't have to worry about why anymore.
  11. Are you guys saying do NOT use bonding on 6.10? I use bonding to LAGG two 10GbE ports to my aggregation switch. Is that not recommended?
  12. I'll note that it looks like you are using single-stream iperf, in which case that's normal. Try iperf3 -c server -P 8 for 8 parallel streams. Also, I would NOT change your MTU; that's asking for trouble. 1500 is plenty for 10Gb. Jumbo frames are not needed unless you know what they're used for and you need them, and they are going to give you trouble down the line. Remember, anything on the network that talks to that host needs to have its MTU tuned now, and if there is any mismatch you will drop packets. My suggestion is to leave it at the default and save yourself the future problems.
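To expand on the multi-stream suggestion, here is a sketch of the test run on both ends; the hostname is a placeholder for whatever box you're testing against:

```shell
# On the server side (e.g. the Unraid box), start iperf3 in listen mode:
iperf3 -s

# On the client, run 8 parallel TCP streams for 30 seconds:
iperf3 -c unraid-server -P 8 -t 30
```

A single TCP stream often can't fill a 10Gb link on its own, so the parallel run gives a much better picture of what the link can actually do.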
  13. Was this fixed for you in the 6.10.0 stable? I swear it was still broken in the betas, but lo and behold, it is correct for me now. Edit: The only thing I did recently should have had zero effect on networking. I needed (for long-story reasons) to use a cache NVMe drive in another server, and I had another one I was swapping in. So I set the cache to "Yes" so everything moved to the array. I then stopped the array, disabled VMs and Docker, and started up again so I could move the system and domains folders. I then replaced the cache drive, made sure it was up and running, and set the cache to "Prefer" so everything copied back to the cache. When that was done, I set it back to cache-only as it was before, stopped the array, re-enabled Docker and VMs, and started up again. I then noticed an issue with my 802.3ad LAGG bond and went to network settings to fix it, and saw the bond was screwed up: it had dropped my duplicate NIC. So I re-did my LAGG, set up the VLANs again, and started the array. I'm not sure what any of those steps had to do with the network setup, except that I did have the Docker and VM services stopped when I rebooted with the new cache drive, so potentially that cleared something in the config that Docker or the VM manager was doing to create the dupe NIC.
  14. I just don't touch it and it seems to work OK. It still shows multiple dupes, but that's fine: as long as you don't try re-arranging them, it works.
  15. I'll note that if you want to place Unraid behind a load balancer/reverse proxy and handle SSL termination yourself, you need to set "Use SSL" to No and you will be all set. Auto isn't really that great: what's the point of a local server if you need to reach out to the internet to use it? Doesn't make sense IMHO. "Use SSL: Yes" might make sense if you don't have a local domain you are already using, but it really screws things up if you do. So just be aware: if you want to reach this as unraid.localsite.net or whatnot internally, you need to turn off SSL on the Unraid side to get it to redirect properly.
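For anyone setting this up, here is a rough sketch of what the reverse-proxy side might look like with nginx; the hostname, certificate paths, and Unraid IP are all placeholders, not from any real config:

```nginx
# Hypothetical nginx vhost terminating TLS in front of the Unraid web UI.
server {
    listen 443 ssl;
    server_name unraid.localsite.net;

    ssl_certificate     /etc/nginx/certs/localsite.crt;
    ssl_certificate_key /etc/nginx/certs/localsite.key;

    location / {
        # Unraid itself serves plain HTTP because "Use SSL" is set to No.
        proxy_pass http://192.168.1.5:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;

        # The Unraid UI relies on websockets for live updates.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With that in place, the proxy owns the certificate and Unraid just answers plain HTTP on the LAN.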