letrain

Members · 58 posts

  1. I tried it every which way: PCIe ACS override set to both, VFIO interrupts, i440fx, Q35, multiple versions. Finally I got both cards to boot in the same VM once with no plugin. Once. Then I tried each card individually, and that seemed to work best; they would each boot on their own, but not together. I started turning things off in the VM settings to nail down what I actually needed on and off. I eventually ended up at ACS override = downstream; since both cards are in the same IOMMU group, I figured that was best if I was planning on separate VMs. Q35, Hyper-V off, etc. The Nvidia plugin is fine, with my P2000 happily mining away.
     Happy to also report the 2070 Super is booting and restarting fine without a ROM file; I thought I had seen in the forum that unRAID didn't support that yet. The only time I've had to use a ROM is when, during testing, I force-stopped the VM and it wouldn't boot again without one. Then my AMD VM started having issues: random lockups, having to force shutdown. Then I found your discussion here about RX cards, and I'm wondering if this reset bug has been my issue the whole time. As I said, I had this working with both GPUs in a VM from March to August this year with your plugin running. Then after moving it was suddenly a no-go. Amazing how a driver update or a Windows update (I'm blaming Windows, 98%) breaks everything. I'm at a loss as to where exactly the issue was/is, and I'm not sure how your plugin affects booting both the Nvidia and AMD cards in the same VM.
     I'm considering upgrading to RC1, since I don't really want to mess with the kernel, and using the reset plugin. I used the Nvidia-modified unRAID build until your plugin came around; I prefer the plugin method for ease of use, less hassle when upgrading versions, and easier troubleshooting since it can be removed and re-added. I'm booting in legacy mode, since that was the preferred method for unRAID GPU passthrough (and my previous system didn't boot a UEFI USB), and stubbing the PCIe devices (a sketch of that boot config follows this list).
     Every system and motherboard has its own quirks. After passing through various GPUs, PCIe devices, and VMs, I finally did some research on my HP Z800 and found out that each CPU manages different PCIe lanes and slots, which completely explains why some VMs ran like crap or had issues initializing cards. I'm sure this setup has its own quirks as well. I'm going to keep playing around and see if I can get it all running flawlessly; right now it's not. I'll try your suggestions. Thank you for the efforts and suggestions.
  2. The iGPU is the reason I upgraded my system from the Z800. I have a lot of 4K HEVC and had weird color issues when I first set this PC up using the iGPU; I'm trying it again now. I do transcode a bit of 4K because the kids use tablets on the go, and it does seem to handle it. The P2000 was outperforming the iGPU when I got the system, but it seems to be OK now. Honestly, I've been mining ETH in the T-Rex miner docker for a while; the card is just sitting there anyway, so I might as well put it to work. It was designed to be under load for hours on end, and it does well. I have no other use for the P2000 other than mining.
     I had lots of difficulty with the Windows VM. I tried both machine types, a bunch of different versions, etc., and couldn't get the card recognized in Windows when it booted, and couldn't get any drivers to install for AMD or Nvidia. I actually put the SSD in another computer, got Windows updated and drivers working, then put it all back and couldn't get it to boot. Removed the plugin and the VM instantly booted. I'll try without the ROM (BIOS) file and try i440fx again and report back; if it boots, I'll install your plugin and see if it still boots. Yeah, sometimes it pins a core at 100%, then after a minute it drops down and is fine, but with the plugin installed it just pinned and sat there for 10 minutes. It always amazes me how one minute everything works fine (I had this VM set up before I moved) and you don't touch or change anything, then all of a sudden it just stops cooperating.
  3. One for Docker, two for VMs. My AMD RX 580 and 2070 Super are bound to VFIO, and I want my P2000 for Docker. If I start without the plugin, the VM boots fine; then if I install the plugin and just turn the Docker containers off and on, it works fine. If I reboot, however, Docker works fine with the Nvidia card but the VM won't boot. The VM for some reason pins a few CPU cores at 100%, allocates RAM, stops booting Windows, and just sits there. It's an odd occurrence. I've done this before on a different setup (HP Z800); ever since I switched to the current setup it's been finicky. tower-diagnostics-20211207-1138.zip Sorry about the screenshot; I wasn't sure how to show a VM that hangs, as the log had no errors. And do I screenshot with the plugin installed or not? With no plugin, there's no nvidia-smi. I can get some screenshots if you still need them.
  4. Not sure if this is the correct location... It appears that on 6.9.2 the Nvidia plugin breaks GPU passthrough to a Windows 10 VM. Without the plugin, passthrough of my 2070 Super works fine. With the plugin installed, the VM hangs.
  5. So it has to install the AMD and Nvidia drivers every time the container starts? I have the Nvidia plugin installed, and Radeon Top... Every time I start the container it downloads both drivers...
  6. So I see there are now lots of nice options in the settings. I see GPU core clock; is there a way to adjust the memory clock? I'd much rather use this lovely docker instead of a Windows VM...
  7. Just installed. Every time the container starts it says driver mismatch and has to download drivers. I have the Nvidia plugin drivers installed that are listed in the OP; the one in the docker is higher, and it says it has to download the lower version I have installed:
     ---Trying to get Nvidia driver version---
     ---Successfully got driver version: 460.84---
     ---Checking Xwrapper.config---
     ---Configuring Xwrapper.config---
     ---Driver version missmatch, currently installed: v465.19.01, driver on Host: v460.84---
     ---Downloading and installing Nvidia Driver v460.84---
     The OP says not to use 465...
  8. Any luck? I love your container for my P2000, but I'm running a VM for my 2070 Super.
  9. I'd be interested in lolMiner. In Windows, at least, it seems to keep temps down but hashrate up.
  10. I tried searching and maybe I'm just not searching correctly, but I can't seem to get my RX 580 8GB over 24 MH/s. I can get 27-30 MH/s in Windows with the same heat and power consumption. I did add the -acm flag and tried some others. I also have my 2070 Super mining as well; I understand your docker doesn't officially offer Nvidia support. I can hit 70 MH/s combined with the same temps in Windows, but I'm stuck at 62 MH/s with your docker. Power consumption doesn't matter, as my electricity is included in my rent no matter how much I use. Any help would be appreciated. I like your docker; I think it's better than GPU passthrough since I can use my 2070 for Plex as well.
  11. Rules section? I'm not seeing that, just both interfaces, and eth0 has no MAC address... just a yes/no toggle; if I change it to yes, it offers bonding modes. eth1 has the correct MAC address for the 2.5Gbps Ethernet port. I've had a server with two NICs before and I could switch them around by MAC address (something like the rules file sketched after this list). I still don't understand why I have eth0 and eth1...
  12. I also had an issue with some dockers saying they don't exist. Here are the diagnostics: tower-diagnostics-20210330-0810.zip Edit: just rebooted and it's showing one eth again... I'm not sure what's going on; I didn't do anything.
  13. I recently moved, so the server has been off for 14 days. For some reason unRAID is showing eth0 and eth1, and I only have one Ethernet port. I've deleted network.cfg a couple of times, and it works after one reboot, then fails on the next and gives me two eth interfaces again. I don't have internet access unless I enable bonding, which doesn't make sense as I have only one Ethernet port.
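For the legacy-boot and stubbing setup mentioned in post 1, here is a minimal sketch (not the exact config used above) of what an unRAID syslinux.cfg append line can look like with the ACS override enabled and GPUs bound to vfio-pci at boot. The vendor:device IDs are examples only (10de:1e84 for a 2070 Super, 1002:67df for an RX 580); the real IDs, including the cards' audio functions, have to be read from Tools > System Devices on the actual server.

     label Unraid OS
       kernel /bzimage
       append pcie_acs_override=downstream,multifunction vfio-pci.ids=10de:1e84,1002:67df initrd=/bzroot

pcie_acs_override accepts downstream, multifunction, or both combined (which is what the "Both" option in the VM settings writes), matching the both/downstream values tried in post 1. On unRAID 6.9 and later, the same vfio-pci binding can usually be done from the System Devices page instead of editing the append line by hand.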
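On the eth0/eth1 naming question in posts 11-13: the "Rules" section being referred to is the interface-rules mapping that pins an interface name to a MAC address. Below is a minimal sketch of that kind of rules file; on unRAID it is normally stored as /boot/config/network-rules.cfg in standard udev syntax, and the MAC address shown is a placeholder, not the real one from the server above.

     # keep the NIC with this (placeholder) MAC address named eth0 across reboots
     SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

If a phantom second interface keeps coming back after network.cfg is deleted, checking (or deleting and letting unRAID regenerate) this rules file is usually the next step, since a stale MAC-to-name mapping can survive a plain network.cfg reset.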