KptnKMan

Everything posted by KptnKMan

  1. I'm testing Virtiofs on my main Win11 VM and having quite a bit of success, but with a few caveats. Firstly, while installing the latest virtio drivers, my VM IMMEDIATELY CRASHES. I found through lots of trial and crashes that if I leave "Fwcfg" DISABLED, the install finishes. Disabling it allows the updated drivers to install and the services to be set up, and does not crash the VM. What is this Fwcfg? Can anyone shed some light on that? Secondly, I'm trying to set up 2 virtiofs disks, but only 1 appears, as the Z: drive. If I set up 2 drives, only the second appears as Z: when the VM boots. In this example, I set up "vdata1" and "vdata2": Thirdly, the virtiofs share is always set as Z:. I cannot find or see any way to reassign the drive letter, or indeed use multiple drive letters. Is there a way to change the assigned drive?
  2. I am assuming that you rebooted the server during this operation? I.e. shut down, remove disk, format in another machine, replace in original machine, start up? Since this is marked as the solution, I wonder if it's the reboot that is important? Also, is there a way to reset this without rebooting?
  3. Oh thanks for this solution, I've been getting a similar issue where I replaced/upgraded my NVMe cache with a larger drive and reassigned the existing cache disk as an Unassigned disk. As a result, for some reason the "old" cache disk (now the 2nd NVMe) would show up with the option to FORMAT, but after that would only present the option to Preclear, which I did, but then I cannot utilise it as an Unassigned disk because the MOUNT option never becomes usable: After formatting it and Preclearing it a few times, nothing seemed to be working. I also cleared the disk a few times, deleting all partitions. After clearing the disk a final time and performing a full system reboot, it seemed to become usable again: Seems like an odd bug somewhere, maybe?
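     For anyone trying the same, clearing the disk from the terminal can be done with something along these lines (the device name is only an example here, so triple-check which disk you are pointing at before wiping anything):
        # identify the old cache disk first - the device name below is a placeholder
        lsblk -o NAME,SIZE,MODEL,MOUNTPOINT
        # remove any partition-table / filesystem signatures from the old cache NVMe
        wipefs -a /dev/nvme1n1
        # optionally TRIM the whole device as well
        blkdiscard /dev/nvme1n1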
  4. I'm using a "Gigabyte RTX3090 Turbo 24G". Full system specs are in my signature; this is on UNRAID1. I flashed this card with the updated UEFI BIOS some time ago, and have a dumped & hexed BIOS of the same card that I use to boot VMs.
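     (For anyone curious, the usual way to dump a vBIOS via sysfs looks something like this - the PCI address and output path here are just examples, and the card generally shouldn't be in active use while you read the ROM:)
        # rough sketch of dumping a GPU vBIOS via sysfs; PCI address and path are examples
        cd /sys/bus/pci/devices/0000:0c:00.0
        echo 1 > rom                                   # allow the ROM to be read
        cat rom > /mnt/user/isos/vbios/rtx3090.rom     # copy it somewhere on the array
        echo 0 > rom                                   # lock it again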
  5. Thanks for the response. I'm still struggling with getting any VMs or even "unRAID with GUI" to start (the server starts, but the local web GUI shows a black screen with a blinking cursor). Booting a VM just shows a black screen. I see this in the logs when I start a VM:
     Oct 8 19:01:50 unraid1 kernel: vfio-pci 0000:0c:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
     Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered blocking state
     Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered disabled state
     Oct 8 19:01:50 unraid1 kernel: device vnet0 entered promiscuous mode
     Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered blocking state
     Oct 8 19:01:50 unraid1 kernel: br0: port 2(vnet0) entered forwarding state
     Oct 8 19:01:52 unraid1 avahi-daemon[16989]: Joining mDNS multicast group on interface vnet0.IPv6 with address ipv6addresshere.
     Oct 8 19:01:52 unraid1 avahi-daemon[16989]: New relevant interface vnet0.IPv6 for mDNS.
     Oct 8 19:01:52 unraid1 avahi-daemon[16989]: Registering new address record for ipv6addresshere on vnet0.*.
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x1e@0x258
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x26@0xc1c
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x27@0xd00
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: vfio_ecap_init: hiding ecap 0x25@0xe00
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.0: BAR 1: can't reserve [mem 0x7000000000-0x77ffffffff 64bit pref]
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.1: enabling device (0000 -> 0002)
     Oct 8 19:01:53 unraid1 kernel: vfio-pci 0000:0c:00.1: vfio_ecap_init: hiding ecap 0x25@0x160
     Does anyone have an idea what's going on or where I can possibly investigate? Of all of these, everything is set and enabled. If I leave everything the same and boot unRAID in non-UEFI mode, the local GUI and VMs work, but ReBAR is not enabled.
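     In case it helps anyone point me in the right direction, these are the kinds of checks I'm planning to run next (just diagnostics, not a confirmed fix, and the PCI address is my card's):
        # look at the card's current BAR/ReBAR state and who owns the conflicting memory range
        lspci -vvv -s 0000:0c:00.0 | grep -iE 'region|resizable'
        dmesg | grep -iE 'efifb|simplefb|BAR 1'
        grep -i -A2 '7000000000' /proc/iomem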
  6. I'm just adding my plea again, if anyone knows anything at all on how to resolve this. Is there anything in the just-released unRAID 6.11.0 that helps with this? Also, I am not aware of, and could not find, any hardware-specific oddities with enabling ReBAR without CSM and the black screen issues; is there something that anyone might be able to highlight? Is the QEMU read-only ReBAR-disabled issue only applicable to certain hardware or AMD-only setups, or something else? If I may, @alturismo what hardware are you using? Intel CPU? I've just been trying to get this to work for a long time now, and most other things work just fine, just not this. My hardware is in my sig; not sure what the issue is.
  7. I'm just responding to report that I've been running 6.10.3 stable for some time now, and no issues have been noticed with the ConnectX-3 cards swapping around and acting strange. I'm very grateful to the Unraid developers for their attention on this issue, and to the forum mods for making this space available.
  8. I had this strange issue today when trying to shut down my system due to some issues. I found that these commands worked:
     killall -Iv docker
     killall -Iv containerd
     umount -l /dev/loop2
     It only started happening recently, and it's /mnt/cache that is unable to unmount. It seems that it's only something Docker-related that is causing this issue, for me at least. I'm going to put this into a UserScripts script (rough version below), in case I need it again, then I can just fire it off.
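     The rough User Scripts version I'm saving looks like this (untested beyond the commands above, and the loop device is just whatever docker.img happens to be mounted on for me):
        #!/bin/bash
        # force-stop the docker processes so /mnt/cache can unmount cleanly
        killall -Iv docker
        killall -Iv containerd
        sleep 5
        # /dev/loop2 is my docker.img loop device - confirm yours with: losetup -a
        umount -l /dev/loop2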
  9. Hi, thanks for the info. Actually I did get a chance to upgrade to 6.10.3 on both systems, almost exactly this time yesterday. I noted a couple of things:
     - The Network Interface Rules dialogue returned (yay!)
     - The NICs in my systems are set up as eth0 (mlx4_core), eth1 (onboard), eth2 (mlx4_core), and that seemed not to drift to e.g. eth3 (yay!)
     - I did 3 consecutive reboots (on both systems) and the config seemed to stick (yay!)
     I'm not calling this "fixed" just yet, as I need to investigate a couple of other things and test, but it looks good so far. 🙂 Also, my NICs show up as the identical card to the one you have.
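     (For anyone wanting to double-check their own ordering, this is roughly how I confirm which driver each interface ended up on:)
        # print each ethX together with the kernel driver currently bound to it
        for nic in /sys/class/net/eth*; do
          echo "$(basename "$nic"): $(basename "$(readlink -f "$nic/device/driver")")"
        done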
  10. I've found that splitting them up as Mellanox-eth0, onboard-eth1, Mellanox-eth2 produces the most consistent results. See the screenshots I posted earlier, this seems to be working now as it did quite consistently for me in previous releases. Yeah, we'll see when that happens, I'm not trying to rush anyone. I'm just trying to work an angle that I know "reliably", rather than test a new workaround. Nothing is perfect. As an aside though, the networking issues since I upgraded to 10Gbit have put things on hold for about as long as I've had 10Gbit now. All I wanted was to setup a working 10Gbit(Mellanox)/1Gbit(Onboard) failover bond, but that seems too much to ask. These days I just want a stable server and single 10Gbit connection that will persist reboots. Details of that journey in my other long thread.
  11. Well, I rebooted and the same issue reappeared, as if nothing had happened. I hacked the network-rules.cfg manually with the correct interface IDs, kernel modules and hardware addresses... and it reboots fine now. I've rebooted the system 3 times in a row now just to see if something drifts, but it's OK... for now. This is roughly how mine is supposed to look in my config (sketch below): I understand what you're saying, but I don't have time to verify 6.10.3 right now. I only rebooted to add something to the system, and then all hell broke loose. I know the issues with 6.10.1 at this time, and I thought I knew 6.10.2, but I was wrong there as usual. I'm sticking with 6.10.1 until a stable 6.10.3 comes out. In my experience, this doesn't match the behaviour at all. I'm using dual-port Mellanox CX-3 cards in both my servers and can verify the behaviour. What seems to happen quite consistently, and as I've documented extensively in threads on this forum, is that the first Mellanox interface seems to be fine but the second appears to be created twice. Then some kind of cleanup happens and a gap is left. That process of creating/removing the second interface seems to mess up other assignments. If the Mellanox dual-port card is assigned last, it doesn't seem to have the issue as far as I can tell, but in unRAID if you want the Mellanox MAC as the bond MAC then it needs to be the first MAC, on eth0. 😐 So in my experience, on both my servers, eth0 has never been the issue if the first Mellanox port is set to eth0. I could be wrong here, but I'm just saying what happened to me.
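     To be clear, the sketch below is only the shape of the file, with placeholder MAC addresses rather than my actual values - on my systems the file is /boot/config/network-rules.cfg:
        # rough shape of network-rules.cfg - MAC addresses here are placeholders only
        # PCI device (mlx4_core) - first Mellanox port
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:02:c9:aa:bb:01", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
        # PCI device (mlx4_core) - second Mellanox port
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:02:c9:aa:bb:02", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
        # PCI device (onboard NIC)
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:00:11:22", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"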
  12. Thanks for the advice. This secondary system is my "stable" unRAID that I basically never mess with, so I'm not keen on non-stable releases here. I already downgraded to 6.10.1, and both the interface rules dialogue and the network-rules.cfg have reappeared. I saw the downgrade worked for people in this thread: However, now I have this (the dropdown shows duplicates): I'm going to delete the network-rules.cfg and reboot to see if that helps.
  13. This is a nightmare. I just downgraded to 6.10.1 and the interface assignments and network-rules.cfg have reappeared, but now there are other issues. I'll document them in the linked thread; I try not to hijack other people's support threads with my issues. Thanks for the advice.
  14. Well, looks like I might need to downgrade. ☹️ I'm seeing this same issue, and now I'm stuck using 6.10.2.
  15. Well, this just became a major issue for me today. I rebooted my secondary, and the cards swapped around again, so that the onboard is eth0, with the Mellanox as eth1 and eth2. This also means that the bond MAC address is now my onboard's, which is not good. My assignments use the Mellanox card. While this is happening, I still have no access to the interface rules dialogue, so now everything is messed up and I cannot switch it back. Also, checking the system, I can see that the network-rules.cfg file is not created. @bonienl I realise that this is not a support ticket or anything, but this was supposed to be fixed? I don't know what to do. I'm kinda stuck here. Any advice?
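     (For reference, this is the quick check I use to see which MAC the bond has picked up and which slave it came from:)
        # show the bond's current MAC and the permanent MACs of its slaves
        cat /sys/class/net/bond0/address
        grep -E 'Slave Interface|Permanent HW addr' /proc/net/bonding/bond0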
  16. Ah, I thought vmx was used across both Intel and AMD, and that it was just a question of support. Like a common layer on top of VT-x/AMD-V, but alas I am mistaken. As usual, there is a separate name and term for a similar technology between Intel and AMD. ¯\_(ツ)_/¯ I also looked into KVM support for VMX/SVM and found there's quite a bit of information, like here and here. As for guest support, and guest awareness of the VMX/SVM extensions, that seems to be another layer of issues that (in this case) has had some progress in Windows 10/11. Getting Docker and VMs working within Windows 10 has given me some excitement to test Windows 11 again, now that the new OVMF-TPM BIOS is generally released in unRAID 6.10.x. Time will tell how reliable that is, but I actually found Windows 11 to be faster in a VM than Windows 10, before the TPM limitations ended the fun.
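     (A quick way to see which of the two a host actually exposes, and whether KVM has nested enabled - the module name is kvm_amd here because my hosts are AMD; it would be kvm_intel on an Intel host:)
        # which hardware-virt flag the host CPU advertises
        grep -o -m1 -E 'svm|vmx' /proc/cpuinfo
        # whether nested virtualisation is enabled in the KVM module
        cat /sys/module/kvm_amd/parameters/nested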
  17. A quick follow-up on this... I noticed that the VM was not just more loaded, but was noticeably more "sluggish" and slow generally. I tried and failed to convert the VM to SeaBIOS, to test comparable performance, and while attempting this I noticed this error would appear each time I tried to update the GUI while the vmx CPU flag was in place (removing the flag avoids the error): So eventually, I made a backup and removed the vmx CPU flag, expecting my VM to die or brick or something strange. Nothing happened: Hyper-V works within the VM, Docker works, WSL2 works, and it "seems" to be a little snappier. I'm not entirely sure what is happening here (it could be more related to updates in 6.10.x), but I thought I would post it just so others know: the "vmx" CPU flag might not be required after all. ¯\_(ツ)_/¯
  18. So I got it working. I found that KVM already had the AMD extensions and nested virtualisation enabled:
     root@primary:~# modprobe -r kvm_amd
     modprobe: FATAL: Module kvm_amd is in use.
     root@primary:~# systool -m kvm_amd -v | grep nested
       nested = "1"
     root@primary:~#
     Enabling the nested module did nothing (as expected):
     root@primary:~# modprobe kvm_amd nested=1
     root@primary:~#
     I've had issues with this before, because I remember last year I enabled Docker extensions in Visual Studio Code and bricked my VM (restored from the nightly backup, so no big deal), but I never tried that again. So I checked my VMs, and per advice around the forums I added the vmx CPU flag:
     <cpu mode='host-passthrough' check='none' migratable='on'>
       ...
       <feature policy='require' name='vmx'/>
       ...
     </cpu>
     ...and started up my VM, installed Hyper-V and Docker, and got it running in WSL2. No errors were seen in Device Manager. I didn't even need to reboot, because the nested extensions were already enabled. For good measure, I downloaded SpaceInvaderOne's script and enabled it (and fixed it, because there is an error in there on line 20), but it is mostly redundant because the extensions are already enabled. Still, it gives me a little more control if I want it in future. I have to say though, the performance took a hit. I've seen reports that SeaBIOS is more performant, but I'd rather stay with OVMF if I can. Just looking at Task Manager was a big oof! I'm going to have to assign more resources to this VM! Thanks for all your help!
  19. Thanks, I'm gonna try this. Will report results.
  20. Hi, was this ever resolved to run nested virtualisation on AMD? I've been trying to get Docker to work within a Windows 10 VM, any advice?
  21. So I upgraded both my main unRAID systems to 6.10.2 and some issues were encountered. It seems that the duplicate interfaces issue is still more or less the same as what I saw in 6.10.1, but I can't really verify. On primary unRAID1, it looks like the upgrade went without incident, and everything came up eth0 (mlx4_core), eth2 (mlx4_core), eth3 (r8169). I'm not going to complain about that because there seems to be no duplicate, but the skipping of eth1 is still present. As long as it works, I'm not fussed. On secondary unRAID2, the upgrade seems to have reset and swapped around my interfaces to eth0 (igc), eth1 (mlx4_core) and eth2 (mlx4_core). The main unRAID screen booted up with both IPv4 and IPv6 as "not set" and I couldn't seem to change the interface order, and had to reboot into safe mode to swap the interfaces around to eth0 (mlx4_core), eth1 (mlx4_core) and eth2 (igc). After this, following a normal bootup, the interface rules selection is completely missing from my network settings, as the network-rules.cfg file disappeared. I read on another page that creating a blank network-rules.cfg and rebooting would fix the problem (sketch below), but that new network-rules.cfg file is gone as well and I'm still stuck without any interface rules selection. I also tried disabling bridging, then bonding, then both, to see if it would trigger the appearance of the interface rules, but nothing. So in the end, my primary unRAID has the odd interface numbering but seems to work, and the secondary unRAID has normal interface numbering but a suddenly missing network interfaces dialogue, and I'm not sure how to fix it. @bonienl if I may ask, I saw that you notified of the fixes in the other related thread; do you have any idea what's happening? Any idea how I can force access to the network interfaces dialogue? I definitely have multiple interfaces installed and listed. I've added diagnostic files if that helps. primary-diagnostics-20220529-1351.zip secondary-diagnostics-20220529-1353.zip
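     (The workaround from that other page amounted to something like this, run from the terminal before rebooting - the path is just where the file normally lives on my systems:)
        # create an empty rules file so unRAID (hopefully) regenerates it on boot
        touch /boot/config/network-rules.cfg
        reboot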
  22. Oh, then I might recommend that if you use Assetto Corsa you give it a try, it's quite amazing. AC Content Manager is the definitive frontend for Assetto Corsa, used by basically everyone at this point. It essentially replaces the entire frontend of Assetto Corsa, and manages all resources and game files for you, like installing cars, tracks, graphical plugins (Custom Shaders Patch, Sol, etc), modules, and basically all game artifacts. As far as I know, this option within Server Manager (Content Manager Wrapper) allows the server to expose more information than the regular AcServer, and so provides Content Manager clients with added metadata. Like I said, using Content Manager is the definitive way to play Assetto Corsa: I hope that's helpful. Anyway, thanks, I'll take a look at exposing port 9679 and using that as a default for Content Manager Wrapper (rough example below). Is this something you might consider adding to the container as an exposed port?
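     (Just as a rough illustration of what I mean by exposing the extra port - the container and image names here are placeholders rather than this actual container, 9600/8081 are only the stock AC server ports, and 9679 is just the value from that screenshot:)
        # sketch only - names are placeholders, not the real container from this thread
        docker run -d --name=ac-server \
          -p 9600:9600/tcp -p 9600:9600/udp \
          -p 8081:8081/tcp \
          -p 9679:9679/tcp \
          your-ac-server-image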
  23. Yeah thanks, I've been looking through there already too. There seems to be no definitive answer, the only thing I can find is this screenshot with port 9679 in it:
  24. No, sorry, I mean that within Server Manager, in Server->Options there is (about halfway down) an option for "Enable Content Manager Wrapper" and "Content Manager Wrapper Port", like this: I'm not sure if this is assumed already within the container or if I need to specify it and map a new port? Also, I'm struggling to find anywhere whether there's a recommended port for this.
  25. Hi, I had another question about the Assetto Corsa server, regarding the Content Manager Wrapper extensions for Server Manager. Is there a default port assumed for this in the container, or do I need to externalise additional ports to make this work? Do you know if there is a recommended port for Content Manager Wrapper?