letrain


Posts posted by letrain

  1. It's been a while. I'm now back to wanting to mine in a nice t-rex docker with 1 GPU... only now I'm getting this error.

    20220501 15:15:29 TREX: Can't find nonce with device [ID=0, GPU #0], cuda exception: CUDA_ERROR_OUT_OF_MEMORY

    It's a P2000. I mined just fine before, and I've been happily mining with it in Windows up until now. It worked fine for ETH mining, though it's right at its VRAM limit. I did have to run a special script in Windows so it was "headless" and 100% free to mine, but I've never had this issue in t-rex and have no idea how to fix it.

     

    EDIT:

    UnRaid Version: 6.10.0-rc5 

    So... I went to "ptrfrll/nv-docker-trex:cuda10-latest" and it seems to be working. I was on "latest" because nvidia-smi reports:

     NVIDIA-SMI 495.46       Driver Version: 495.46       CUDA Version: 11.5  

     

    so I was using the latest t-rex tag because of CUDA 11.

     

    Is this going to be an issue going forward?
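
    For reference, a quick way to check how close the card is to its VRAM limit from the Unraid console (a minimal sketch using standard nvidia-smi query flags; nothing here is t-rex-specific):

      # Report driver version plus total/used/free VRAM for each GPU
      nvidia-smi --query-gpu=name,driver_version,memory.total,memory.used,memory.free --format=csv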

  2. 38 minutes ago, SimonF said:

    Thanks for reporting, I will look into it. It was working fine on internal rc3 releases.

    I'm on rc3.... I uploaded my diagnostics just in case. I'm not sure what's going on. I thought it might have been a Dynamix plugin of some kind. I recently installed his file manager and wasn't sure if something with the GUI got messed up... I've used your USB plugin before. I was trying out VirtualHere, so I removed all the USB stuff and have now re-added it. Is there anything else I can send over that could help? It could just be my server setup? I followed the wiki. I added them to usbip-bind@hardware:id.service instead of 1-1.2 this time; not sure if that's the issue. I can see them, but when I click attach, nothing happens. (A sanity-check sketch is below.)

     

    Thanks!

     

    Edit: just saw the Fix Common Problems plugin is throwing an error at me.

    Screenshot 2022-03-13 14.22.20.png

    tower-diagnostics-20220313-1417.zip

    Screenshot 2022-03-13 14.27.19.png
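
    Sanity-check sketch, in case it helps (assuming the plugin's usbip-bind@ template unit takes the same bus IDs that usbip reports; 1-1.2 is just the example bus ID from the wiki):

      usbip list -l                               # list local USB devices with their bus IDs
      systemctl status usbip-bind@1-1.2.service   # confirm the bind unit actually started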

  3. On 1/12/2022 at 7:52 AM, Spritzup said:

    I was experiencing the same issue, but it seems to be resolved now.

    Related though, in the previous version of unmanic, you could set it to create a stereo track from an existing multi-channel track (while keeping the original).  That said, I can't find an option to do that anywhere... does anyone have any ideas?

     

    Thanks!

     

    ~Spritz

    Search the topic for "stereo" and you will find the custom repo from last year. Add it and search for "clone". I just found it. You might have to restart the docker, or refresh the page, for it to come up.

  4. On 12/9/2021 at 12:58 AM, ich777 said:

    That seems like an issue with the motherboard/BIOS where something is not properly implemented.

    Have you turned on RBAR and/or above 4G Decoding in the BIOS?

    I think I don't have to ask but anyways... Are you on the latest BIOS version? :)

     

    Yes, Nvidia "supports" this now "officially": Click

     

    I would also recommend upgrading to RC2, since I've been on it since day one and have no problems whatsoever. I will also drop the Kernel Helper, since you can add nearly everything with plugins.

     

    Please keep in mind the Vendor Reset Patch is just a workaround and will/can fail after n reboots of the VM.

     

    Maybe try to boot into UEFI mode, because then allocation of the memory regions and talking to the hardware works a little differently than with CSM.

     

     

    Went to rc2. Didn't realize it was out yet. 

     

    I saw Nvidia does officially support it now. I saw somewhere on the forums here that Unraid didn't yet, or something. It's working fine without a ROM.

     

    Moved to RC2. Still didn't work... so I changed up the PCIe slots for everything. Then I had to turn on Downstream in VM settings, as the top two PCIe slots are tied together and the P2000 occupied one of them.

     

    Well, that worked for booting dual GPUs in one VM. Awesome. Then the Nvidia plugin and a reboot, and it's still working.

     

    Somewhere along the way it works now. 

     

    RC2, Downstream, no ROMs for either card, the AMD GPU reset plugin (honestly the VM should never reboot unless the server does), the Nvidia plugin, and my P2000 is happily mining ETH with only 5GB of VRAM :)

     

    Thanks for the help. I'm still not sure why the original issue occurred. I tried the same solutions; the only differences are RC2 and the reset bug plugin. All those combos were tried previously.

     

    BIOS is up to date. 4G decoding is on; Resizable BAR is not. When I researched it, I had seen it wasn't "supported yet". And as far as I know it's the latest BIOS; I haven't checked in a while. With Windows 11 out I'm sure there was an update recently, but as everything is working I don't want to bork it.

     

    It seems every system is particular.
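
    For anyone else chasing slot/grouping quirks like this, a quick way to see how the slots group together (a standard sysfs walk, nothing Unraid-specific; group numbers will differ per board):

      # Print each PCI device together with its IOMMU group number
      for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=${d%/devices/*}; g=${g##*/}
          echo -n "group $g: "; lspci -nns "${d##*/}"
      done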

  5. On 12/8/2021 at 9:45 AM, Evo said:

    I can only speak for the AMD drivers: yes, every time. Ouch. Nvidia does skip over them and acknowledges they are installed now, but AMD is every time... my internet connection isn't fast enough, ha. It'd be nice if we had an AMD plugin that loaded drivers like the Nvidia one does. I see why we don't, because not many people use them for transcoding.

     

  6. 16 hours ago, ich777 said:

    What you can also do is to try and enable:

    [screenshot: the PCIe ACS override "Downstream" setting]

     

    I had a user with an AMD system which also had a weird issue, and when enabling Downstream it all began to work.

     

    Oh, I forgot: do you boot in UEFI or Legacy? I would strongly recommend booting with Legacy Mode, and if you are already booting with Legacy Mode, try UEFI instead.

     

    I tried every which way. I turned on PCIe ACS override (both), VFIO interrupts, i440fx, Q35, multiple versions. Finally I got them both to boot in the same VM once, with no plugin. Once... Then I tried each card individually. That seemed to work best, as they would both boot individually, but not together. I started turning things off in VM settings to nail down what I actually needed to have on or off. I eventually ended up at Downstream, since they are both in the same IOMMU group and I thought that best if I was planning on separate VMs; Q35, Hyper-V off... etc. And the Nvidia plugin is fine, with my P2000 happily mining away. Happy to also report the 2070 Super is booting fine and restarting without a ROM file; I thought I saw in the forum that unRAID didn't support that yet. The only time I've had to use a ROM is when, in testing, I had to force stop the VM and it wouldn't boot without it.

     

    Then my AMD VM started having issues: random lockups, having to force shutdown... Then I found your discussion here about RX cards...

     

    I'm wondering if this reset bug has been my issue the whole time. As I said, I had this working with both GPUs in a VM from March to August this year, with your plugin working too. Then after moving it was a no-go all of a sudden... amazing how a driver update or a Windows update (I'm blaming Windows, 98%) breaks everything.

     

    I'm kinda at a loss as to where exactly the issue was/is. Not sure how your plugin affects me trying to boot both Nvidia and AMD in the same VM.

     

    I'm considering upgrading to RC1, as I don't really want to mess with the kernel, and using the reset plugin. I used the Nvidia-modified unRAID version until your plugin came around. I prefer the plugin method for ease of use and less hassle when upgrading versions; troubleshooting is also easier when it's simple to remove and re-add.

     

    I'm booting in Legacy Mode, as that was the preferred method for unRAID if you wanted GPU passthrough (and my previous system didn't boot a UEFI USB), and I stub the PCIe devices (roughly as sketched below).

     

    Every system and motherboard can have different things that make it unique. After passing through various GPUs, PCIe devices, and various VMs, I finally did some research on my HP Z800 and found out that each CPU managed different PCIe lanes and slots, which completely explains why some VMs ran like crap or had issues initializing cards. I'm sure this setup has its own quirks as well. I'm going to keep playing around and see if I can get it all flawless here. I want it to be a trouble-free operation and right now it's not. I'll try your suggestions.

     

    Thank you for the efforts and suggestions. 
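
    For anyone following along, the stubbing I mean looks roughly like this (a minimal sketch assuming legacy syslinux boot; the vendor:device IDs are placeholders, check yours with lspci -nn):

      # /boot/syslinux/syslinux.cfg -- add the IDs to the append line
      append vfio-pci.ids=1002:67df,10de:1e84 initrd=/bzroot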

  7. 4 hours ago, ich777 said:

    May I ask why you don't use the iGPU for transcoding? In terms of quality it is a different world compared to NVENC (Pascal), and I think in terms of parallel streams it is almost the same, if I'm not mistaken.

     

    Can you try to switch over from Q35 to i440fx and see if this makes a difference?

    I also would recommend that you don't use a BIOS file for the Nvidia cards because Nvidia now "officially" "supports" consumer cards in VMs.

     

    I've had the same behavior on my little son's box: it took almost two minutes to even see an output on the screen before the VM actually started, but I've also heard that on some systems the VM doesn't start regardless of how long they wait.

     

    The iGPU is the reason I upgraded my system from a Z800. I have a lot of 4K HEVC and had weird color issues when I first set this PC up using the iGPU; I'm trying it again now. I do transcode a bit of 4K because of tablets on the go for the kids, and it does seem to handle it. The P2000 was outperforming the iGPU when I got the system, but it seems to be OK now. Honestly, I've been mining ETH in the t-rex miner docker for a while; it's just sitting there anyway, so I might as well put it to work. It was designed to be under load for hours on end and it does well. I have no other use for the P2000 other than to set it mining.

     

    I had lots of difficulty with the Windows VM. I tried both, a bunch, different versions, etc., and couldn't get the card recognized in Windows when it booted, and couldn't get any drivers to install for AMD or Nvidia. I actually put the SSD in another computer, got Windows updated, drivers and all working, then put it all back and couldn't get it to boot up. I removed the plugin and the VM instantly booted.

     

    I'll try it without the BIOS file and try i440fx again and report back. If it boots, I'll install your plugin and then see if it still boots.

     

    Yeah, sometimes it pins a core at 100%, then after a minute it drops down and is fine. But with the plugin installed it just pinned and sat there for 10 minutes.

     

    It's always amazed me how one minute everything works fine (I had this VM set up before I moved) and you don't touch anything or change anything... then all of a sudden it just stops cooperating.
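
    (Side note on the iGPU route, in case it's useful: a quick sketch of how I'd verify it's available to a container, assuming a Plex-style docker; the device path is the standard Intel render node.)

      ls /dev/dri            # card0 / renderD128 should be present on the host
      # then pass it to the container, e.g. as an Extra Parameter:
      --device=/dev/dri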

  8. One for docker, two for VMs. My AMD RX 580 and 2070 Super are bound to VFIO, and I want my P2000 for docker. If I start without the plugin, the VM boots fine; then if I install the plugin and just turn dockers off and on, it works fine. If I reboot, however, docker works fine with Nvidia, but then the VM won't boot. The VM for some reason pins a few CPU cores to 100%, allocates RAM, and stops booting Windows and just sits there. It's an odd occurrence. I've done this before on a different setup (HP Z800); ever since I switched to the current setup it's been finicky. (A quick driver-binding check is sketched below.)

    tower-diagnostics-20211207-1138.zip

     

     

    Sorry about the screenshot. I wasn't too sure how to show a VM that hangs, as the log had no errors. And do I screenshot with the plugin installed, or not? With no plugin, there's no nvidia-smi. :) I can get some screenshots if you still need them.
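
    A quick driver-binding check, for what it's worth (plain lspci, nothing plugin-specific; the slot address is an example, check yours under Tools -> System Devices):

      lspci -k -s 03:00.0    # "Kernel driver in use" should read vfio-pci for the passed-through
                             # cards, and nvidia for the P2000 once the plugin has loaded it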

  9. Not sure if this is the correct location... It appears that on 6.9.2 the Nvidia plugin breaks GPU passthrough to a Windows 10 VM. Without the plugin, passthrough of my 2070 Super works fine. With the plugin installed, the VM hangs.

  10. Just installed... every time the container starts it says driver mismatch and has to download drivers. I have the nvidia-plugin drivers installed that are listed in the OP. The one in the docker is higher, and it says it has to download the lower version I have installed:

     

    ---Trying to get Nvidia driver version---
    ---Successfully got driver version: 460.84---
    ---Checking Xwrapper.config---
    ---Configuring Xwrapper.config---
    ---Driver version missmatch, currently installed: v465.19.01, driver on Host: v460.84---
    ---Downloading and installing Nvidia Driver v460.84---

     

    The OP says not to use 465...
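
    In case it helps with debugging, a sketch for comparing what the host and the container each report (the container name is a placeholder for whatever yours is called):

      nvidia-smi --query-gpu=driver_version --format=csv,noheader                          # host
      docker exec <container-name> nvidia-smi --query-gpu=driver_version --format=csv,noheader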

  11. On 5/18/2021 at 10:03 AM, PTRFRLL said:

     

    Thanks for the link for NSFMiner, I hadn't seen that container before. I'll see if I can integrate the OC capabilities into this image as well.

    Any luck? I love your container for my P2000, but I'm running a VM for my 2070 Super.

  12. On 4/13/2021 at 6:22 AM, lnxd said:

    The HDMI dummy will still make it fluctuate 😅 I have one too, and when connected it bounces up and down +-5w. If you're just using it for Windows, check out VirtIO Guest Tools if you haven't already, it's an install package on the VirtIO Drivers ISO.

     

    Yup it'd be bound at the moment if you're using it in a VM. It could be better for Plex yeah, but if you're using Quick Sync to transcode h265 content the Intel might be marginally more efficient from hardware HEVC decoding support so could go either way too 😂 If most of your content is h264 or AV1 the Nvidia card will probably be better.
     

    If I was you, I'd definitely test it out and see how you go. Just keep in mind that while you can probably 1-click OC your card in Windows, you'll have to tune your OC manually using PhoenixMiner arguments if you want to OC for mining. And I don't know if that will work with Nvidia cards for my container. It does for AMD cards.

     

    Not to say I wouldn't love to have an Nvidia card, it's just that horrid shortage haha.

     

    EDIT: I just saw your update, sorry! Yeah I think for ages T-Rex Miner was the only one on there, then I made this, I think NfsminerOC was under development at the same time cos it showed up very soon after, and then I put XMRig on there as well. I also have a working container for Lolminer, and it's not a lot of work to change PhoenixMiner out for any GPU miner really. But I don't want to fill up CA with so many alternatives; just the best 😁

    I'd be interested in lolMiner. In Windows, at least, it seems to keep temps down but hashrate up.

  13. I tried searching, and maybe I'm just not searching correctly, but I can't seem to get my RX 580 8GB over 24 MH/s. I can get 27-30 in Windows and still have the same heat and power consumption. I did put in the -acm flag and tried some others. I also have my 2070 Super mining as well; I understand your docker doesn't officially offer Nvidia support. I can hit 70 MH/s combined with the same temps in Windows, but I'm stuck at 62 MH/s with your docker. Power consumption doesn't matter, as my electricity is included in my rent no matter how much I use. Any help would be appreciated. I like your docker; I think it's better than GPU passthrough, as I can use my 2070 for Plex as well.
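
    For context, the tuning I mean looks something like this (PhoenixMiner arguments; the clock/power values are made-up examples, not recommendations, and per lnxd's note above the OC flags are only known to work for AMD cards):

      # enable AMD compute mode, then hypothetical memory/core clocks and a power limit
      -acm -mclock 2150 -cclock 1150 -powlim -20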

  14. 2 minutes ago, John_M said:

     

    It's immediately above the Routing Table section:

     

    [screenshot: the Interface Rules section of the Network Settings page]

     

    but since you're not seeing it you'll have to free up those other NICs temporarily so that Unraid can see them, then that section should appear. So go to Tools -> System Devices and un-stub them (you'll need to stop the VM that uses them first, of course). Then re-boot, navigate to the Settings -> Network Settings page again and reassign eth0 to the appropriate MAC address. Then you can go back and re-stub them, re-boot and start the VM.

     

     

    That sounds good. I will try that and report back.

  15. 3 hours ago, John_M said:

    According to your diagnostics you have five Ethernet ports installed. Unraid is only using one of them, eth1, the rest are stubbed and presumably being used by your pfsense VM. Your problem is that Unraid expects to use eth0 for its GUI. You need to rearrange your NICs so that the 2.5gb NIC is eth0, not eth1. You can do that on the Settings -> Network Settings page of the GUI in the Interface Rules section.

    Rules section? I'm not seeing that, just both interfaces, and eth0 has no MAC address... just a yes/no for MAC address. If I change it to yes, it offers bonding modes... eth1 is the correct MAC address for the 2.5Gbps Ethernet... I've had a server with two NICs before where I could switch them around by MAC address. I still don't understand why I have eth0 and eth1...
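
    (For reference, the eth0/eth1 mapping John_M describes ends up persisted in a udev-style rules file on the flash drive; a minimal sketch with a made-up MAC, assuming the usual /boot/config/network-rules.cfg location:)

      # /boot/config/network-rules.cfg
      SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eth0"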
