KptnKMan

  1. Hi, thanks for the info. Actually I did get a chance to upgrade to 6.10.3 on both systems, almost exactly this time yesterday. I noted a couple of things:
     - The Network Interface Rules dialogue returned (yay!)
     - The NICs in my systems are set up as eth0 (mlx4_core), eth1 (onboard), eth2 (mlx4_core), and that seemed not to drift to e.g. eth3 (yay!)
     - I did 3 consecutive reboots (on both systems) and the config seemed to stick (yay!)
     I'm not calling this "fixed" just yet, as I need to investigate a couple of other things and test, but it looks good so far. 🙂 Also, my NICs show up as the identical card to yours. A quick way to confirm which driver backs each interface is sketched below.
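     For anyone checking the same thing, this is a minimal sketch of how the ethX-to-driver mapping can be confirmed from the unRAID console after a reboot; the loop itself is just an illustration, not something shipped with unRAID:
     # Print each interface and the kernel driver bound to it (sketch)
     for nic in /sys/class/net/eth*; do
       echo "$(basename "$nic") -> $(basename "$(readlink -f "$nic/device/driver")")"
     done
     # Alternative per-interface check, assuming ethtool is available
     ethtool -i eth0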
  2. I've found that splitting them up as Mellanox-eth0, onboard-eth1, Mellanox-eth2 produces the most consistent results. See the screenshots I posted earlier; this seems to be working now, as it did quite consistently for me in previous releases. Yeah, we'll see when that happens, I'm not trying to rush anyone. I'm just trying to work an angle that I know works "reliably", rather than test a new workaround. Nothing is perfect. As an aside though, the networking issues since I upgraded to 10Gbit have put things on hold for about as long as I've had 10Gbit now. All I wanted was to set up a working 10Gbit (Mellanox) / 1Gbit (onboard) failover bond, but that seems too much to ask. These days I just want a stable server and a single 10Gbit connection that persists across reboots. Details of that journey are in my other long thread.
  3. Well, I rebooted and the same issue reappeared, as if nothing had happened. I hacked the network-rules.cfg manually with the correct interface IDs, kernel modules and hardware addresses... and it reboots fine now. I've rebooted the system 3 times in a row now just to see if something drifts, but it's ok... for now. How mine is supposed to look in my config:
     I understand what you're saying, but I don't have time to verify 6.10.3 right now. I only rebooted to add something to the system, and then all hell broke loose. I know the issues with 6.10.1 at this time, and I thought I knew 6.10.2, but I was wrong there as usual. I'm sticking with 6.10.1 until a stable 6.10.3 comes out.
     In my experience, this doesn't match the behaviour at all. I'm using dual-port Mellanox CX-3 cards in both my servers and can verify the behaviour. What seems to happen quite consistently, and as I've documented extensively in threads on this forum, is that the first Mellanox interface seems to be fine but the second appears to be created twice. Then some kind of cleanup happens and a gap is left. That process of creating/removing the second interface seems to mess up other assignments. If the Mellanox dual-port card is assigned last, it doesn't seem to have the issue as far as I can tell, but in unRAID if you want the Mellanox MAC as the bond MAC then it needs to be the first MAC, on eth0. 😐 So in my experience, on both my servers, eth0 has never been the issue if the first Mellanox port is set to eth0. I could be wrong here, but I'm just saying what happened to me. (A generic sketch of what a network-rules.cfg entry looks like is below, with placeholder MAC addresses.)
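     For anyone hand-editing the same file, this is a generic sketch of the persistent-net style entries that network-rules.cfg holds, assuming the same eth0 (Mellanox), eth1 (onboard), eth2 (Mellanox) layout as above; the MAC addresses are placeholders, not my real ones:
     # /boot/config/network-rules.cfg (sketch, placeholder MACs)
     # First Mellanox port (mlx4_core) -> eth0
     SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
     # Onboard NIC -> eth1
     SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
     # Second Mellanox port (mlx4_core) -> eth2
     SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:03", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"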
  4. Thanks for the advice. This secondary system is my "stable" unRAID that I basically never mess with, so I'm not keen on non-stable releases here. I already downgraded to 6.10.1, and both the interface rules dialogue and the network-rules.cfg have reappeared. I saw the downgrade worked for people in this thread: However, now I have this (the dropdown shows duplicates): I'm going to delete the network-rules.cfg and reboot to see if that helps.
  5. This is a nightmare. I just downgraded to 6.10.1 and the interface assignments and network-rules.cfg have reappeared, but now there are other issues. I'll document them in the linked thread; I try not to hijack other people's support threads with my issues. Thanks for the advice.
  6. Well, looks like I might need to downgrade. ☹️ I'm seeing this same issue, and now I'm stuck using 6.10.2.
  7. Well, this just became a major issue for me today. I rebooted my secondary, and the cards swapped around again, so that the onboard is eth0, with the Mellanox as eth1 and eth2. This also means that the bond MAC address is now my onboard's, which is not good; my assignments use the Mellanox card. While this is happening, I still have no access to the interface rules dialogue, so now everything is messed up and I cannot switch it back. Also, checking the system, I can see that the network-rules.cfg file is not being created. @bonienl I realise this is not a support ticket or anything, but this was supposed to be fixed? I don't know what to do, I'm kinda stuck here. Any advice? (The quick check I use to see which MAC the bond has picked up is sketched below.)
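     For reference, this is a minimal sketch of how to confirm which MAC the bond is actually presenting and which member is active, assuming the default bond0 name and active-backup mode:
     # MAC the bond is presenting upstream (should be the Mellanox port if eth0 is Mellanox)
     cat /sys/class/net/bond0/address
     # Active member and the permanent MAC of each slave
     grep -E "Currently Active Slave|Permanent HW addr" /proc/net/bonding/bond0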
  8. Ah, I thought vmx was used across both Intel and AMD and it was just a question of support, like a common layer on top of VT-x/AMD-V, but alas I was mistaken. As usual, Intel and AMD each have their own name and term for essentially the same technology. ¯\_(ツ)_/¯ I also looked into KVM support for VMX/SVM and found there's quite a bit of information, like here and here. As for guest support, and guest awareness of the VMX/SVM extensions, that seems to be another layer of issues that (in this case) has seen some progress in Windows 10/11. Using Docker and VMs within Windows 10 has given me some motivation to test Windows 11 again, now that the new OVMF-TPM BIOS is generally released in unRAID 6.10.x. Time will tell how reliable that is, but I actually found Windows 11 to be faster in a VM than Windows 10, before the TPM limitations ended the fun. (A quick check for which flag a host actually exposes is sketched below.)
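     As a quick sketch of the vendor difference, these commands show which flag the host CPU exposes and which KVM module is loaded; nothing here is unRAID-specific:
     # Prints "vmx" on Intel (VT-x) hosts, "svm" on AMD (AMD-V) hosts
     grep -m1 -o -w -E "vmx|svm" /proc/cpuinfo
     # Which vendor's KVM module is actually loaded
     lsmod | grep -E "kvm_(intel|amd)"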
  9. A quick followup on this... I noticed that the VM was not just more loaded, but noticeably more "sluggish" and slow generally. I tried and failed to convert the VM to SeaBIOS to compare performance, and while attempting this I noticed that this error would appear each time I tried to update the VM in the GUI while the vmx CPU flag was in place (removing the flag avoids the error):
     So eventually, I made a backup and removed the vmx CPU flag, expecting my VM to die or brick or something strange. Nothing happened: Hyper-V works within the VM, Docker works, WSL2 works, and it "seems" to be a little snappier. I'm not entirely sure what is happening here (it could be more related to updates in 6.10.x), but I thought I would post it just so others know that the "vmx" CPU flag might not be required after all. ¯\_(ツ)_/¯
  10. So I got it working. I found that KVM already had the AMD extensions and nested virtualisation enabled:
      root@primary:~# modprobe -r kvm_amd
      modprobe: FATAL: Module kvm_amd is in use.
      root@primary:~# systool -m kvm_amd -v | grep nested
          nested = "1"
      root@primary:~#
      Enabling the nested module did nothing (as expected):
      root@primary:~# modprobe kvm_amd nested=1
      root@primary:~#
      I've had issues with this before, because I remember last year I enabled Docker extensions in Visual Studio Code and bricked my VM (restored from a nightly backup, so no big deal), but I never tried that again. So I checked my VMs, and per advice around the forums I added the vmx CPU flag:
      <cpu mode='host-passthrough' check='none' migratable='on'>
        ...
        <feature policy='require' name='vmx'/>
        ...
      </cpu>
      ...and started up my VM, installed Hyper-V and Docker and got it running in WSL2. No errors were seen in Device Manager. I didn't even need to reboot because the nested extensions were already enabled. For good measure, I downloaded SpaceInvaderOne's script and enabled it (and fixed it, because there is an error in there on line 20), but it is mostly redundant because the extensions are already enabled. Still, it gives me a little more control if I want it in future. I have to say though, the performance took a hit. I've seen reports that SeaBIOS is more performant, but I'd rather stay with OVMF if I can. Just looking at Task Manager was a big oof! I'm going to have to assign more resources to this VM! Thanks for all your help! (A couple of verification commands are sketched below.)
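      As a hedged aside for anyone replicating this on their own AMD box, these are the checks I'd run afterwards to confirm the nested setting and the VM definition; the VM name "Windows10" is just a placeholder:
      # Nested flag as the running kvm_amd module sees it (prints 1 or Y when enabled)
      cat /sys/module/kvm_amd/parameters/nested
      # Confirm the <cpu> block of the VM definition actually carries the added feature line
      virsh dumpxml "Windows10" | grep -A5 "<cpu mode"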
  11. Thanks, I'm gonna try this. Will report results.
  12. Hi, was this ever resolved for nested virtualisation on AMD? I've been trying to get Docker to work within a Windows 10 VM. Any advice?
  13. So I upgraded both my main unRAID systems to 6.10.2, and some issues were encountered. It seems that the duplicate interface issue is still more or less the same as what I saw in 6.10.1, but I can't really verify.
      On primary unRAID1, it looks like the upgrade went without incident, and everything came up as eth0 (mlx4_core), eth2 (mlx4_core), eth3 (r8169). I'm not going to complain about that because there seems to be no duplicate, but the skipping of eth1 is still present. As long as it works, I'm not fussed.
      On secondary unRAID2, the upgrade seems to have reset and swapped around my interfaces to eth0 (igc), eth1 (mlx4_core) and eth2 (mlx4_core). The main unRAID screen booted up with both IPv4 and IPv6 as "not set" and I couldn't seem to change the interface order, so I had to reboot into safe mode to swap the interfaces around to eth0 (mlx4_core), eth1 (mlx4_core) and eth2 (igc). After this, following a normal bootup, the interface rules selection is completely missing from my network settings, as the network-rules.cfg file disappeared. I read on another page that creating a blank network-rules.cfg and rebooting would fix the problem, but that new network-rules.cfg file is gone as well and I'm still stuck without any interface rules selection (the workaround I tried is sketched below). I also tried disabling bridging, then bonding, then both, to see if it would trigger the appearance of the interface rules, but nothing.
      So in the end, my primary unRAID has the odd interface numbering but seems to work, and the secondary unRAID has normal interface numbering but a suddenly missing network interfaces dialogue, and I'm not sure how to fix it. @bonienl if I may ask, I saw that you posted about the fixes in the other related thread, do you have any idea what's happening? Any idea how I can force access to the network interfaces dialogue? I definitely have multiple interfaces installed and listed. I've added diagnostic files if that helps.
      primary-diagnostics-20220529-1351.zip
      secondary-diagnostics-20220529-1353.zip
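      For completeness, this is roughly what I mean by the blank-file workaround, as a sketch run from the unRAID console and assuming the usual /boot/config location on the flash drive:
      # Recreate an empty rules file so it can be repopulated on boot (sketch)
      touch /boot/config/network-rules.cfg
      reboot
      # After the reboot, check whether the file survived and was filled in
      ls -l /boot/config/network-rules.cfg
      cat /boot/config/network-rules.cfg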
  14. Oh, then if you use Assetto Corsa I might recommend you give it a try, it's quite amazing. AC Content Manager is the definitive frontend for Assetto Corsa, used by basically everyone at this point. It entirely replaces the stock frontend of Assetto Corsa and manages all resources and game files for you, like installing cars, tracks, graphical plugins (Custom Shader Mod, Sol, etc), modules, and basically all game artifacts. As far as I know, this option within Server Manager (Content Manager Wrapper) allows the server to expose more information than the regular AcServer, and so provides Content Manager clients with added metadata. Like I said, using Content Manager is the definitive way to play Assetto Corsa. I hope that's helpful. Anyway, thanks, I'll take a look at exposing port 9679 and using that as a default for Content Manager Wrapper (a sketch of what I mean is below). Is this something you might consider adding to the container as an exposed port?
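      Just to illustrate what I mean by exposing the port, this is a minimal docker run sketch that publishes 9679 alongside whatever the container already maps; the image and container names here are placeholders, not the actual template values:
      # Publish the Content Manager Wrapper port (image/container names are placeholders)
      docker run -d --name assetto-server-manager \
        -p 9679:9679 \
        your-assetto-server-manager-image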
  15. Yeah, thanks, I've been looking through there already too. There seems to be no definitive answer; the only thing I can find is this screenshot with port 9679 in it: