truthfulie

Members
  • Content Count: 10
  • Joined
  • Last visited

Community Reputation
  0 Neutral

About truthfulie
  • Rank: Newbie
  1. Tried your settings and I'm only getting 50 MH/s. I need to give it more power. Managed to get 60.3 MH/s with 120W (like I was originally getting on Windows), but it wouldn't hold. I am now back to a 135W power limit and getting a consistent hashrate of 60.4 MH/s, probably because mine is an FE, not an AIB card. In any case, I am having the same issue as Rolucious: updated the docker and the miner would not work. I was on the older v460.67 driver (just because I was reading an older post on this thread when I first set it up). Updated to the latest driver to see if that was the cause; it wasn't. Set it
  2. Just an update. Been playing around with the settings a bit over the weekend and I managed to get the 3070 FE to 60.45 MH/s with lower power usage. Settings (see the command sketch after this list for how they can be applied): pl 135, clock offset -503, memory offset 2300, fan 70%. 130W could get to 60 but struggled to keep it locked there; it would dip to 59 here and there, so I just gave it 5 more watts. Still more efficient than the 150W I was using before.
  3. Interesting. I have not checked the power usage with the same tool. I've checked with nvidia-smi on unRAID (see the query sketch after this list), and I've only tested NiceHash's own miner's reporting (when the card was plugged into a Windows machine), which was kind of nice: it gives wattage and an efficiency rating based on wattage vs. hashrate. Anyway, now I mine ETH directly with a pool. 20W isn't nothing, but it's also not a big enough difference in the power bill for me to worry about too much. I'll fiddle with the OC settings a bit more. PS: which specific 3070 model did you write those settings for? I am running the FE model.
  4. Been able to get 60 MH/s with the 3070 on just 130W through Windows. Any idea why the container needs an extra 20W to get the same hashrate?
  5. I cannot seem to get additional arguments to work for me. I want to manually set a fixed fan percentage, set a power limit, apply a memory OC, and under-clock the core. The parameters do show up in the log when I run the docker, but GPU Stats is showing the same full power draw and the same stock fan curve behavior. Any idea what I might be doing wrong? EDIT: I did manage to get the power limit to work, but I'm still not getting fan control or the clock offsets. I have checked to make sure the docker is running in privileged mode (see the docker run sketch after this list), but the fans still seem to be using the stock settings and my mem
  6. I solved it by just setting the network type to Custom: br0. I don't have webUI in the dropdown, but I can just use the URL. The only thing is I won't be able to access it outside of my local network. Any ideas how to fix that?
  7. I'm having some trouble getting this up and running. Had some issues with the default template values; the webUI would not load. Deleted the server port value and added a new one: container port 80, host port 8282, and changed the webUI to http://[IP]:[PORT:8282]. It loads fine now (the equivalent port mapping is sketched after this list). But I cannot get the Home app to add the accessory through the QR code or the number. I did not add any plugins, just to make sure the bridge gets connected and working first. What am I doing wrong?
  8. Is there a way to find out if my motherboard can do that or not? From just the short boot test I did without a GPU installed, it didn't seem to want to boot into unRAID. Is that an indication that this motherboard won't allow me to do that?
  9. I know that most motherboards won't let you boot without a GPU, and mine seems to be that way, since I wasn't able to get into the unRAID webUI when I removed the GPU. But I thought this was more of a motherboard restriction, not necessarily an unRAID restriction? I was under the impression that I am still able to pass through the one and only GPU to VMs? Or is this incorrect? Also, I forgot to mention in the post, but I did test the system and VM setup (though not extensively) with a loaner GPU (I didn't want to buy one without knowing it is in fact the cause), and I was still having issue
  10. So I've been having some difficulty passing through my GPU to a Windows VM. The system is X570 with a Ryzen 3700X, no integrated GPU; the RX580 is the only GPU in the system. The VM works perfectly fine with VNC and other remote desktop programs like Splashtop, but whenever I assign the RX580 to the VM, it will not boot. I tried to pass the GPU through as secondary with VNC still enabled and discovered it gets stuck on the Windows logo screen with the circling dots. While this is happening, one of the CPUs assigned to the VM goes to 100 percent usage and the log will shoot up and be filled. It goe
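
For the overclock numbers in posts 1 and 2, here is a minimal shell sketch of how a 135W power limit, the clock/memory offsets, and a fixed 70% fan could be applied with NVIDIA's stock Linux tools. It assumes GPU index 0, Coolbits enabled, and an X session that nvidia-settings can reach (some miner containers expose their own OC options instead); note that on Linux the memory value is a transfer-rate offset, so it does not necessarily map 1:1 to a Windows memory-clock offset.

    # Power limit: enable persistence mode, then cap the card at 135 W
    nvidia-smi -pm 1
    nvidia-smi -i 0 -pl 135

    # Core and memory offsets for the highest performance level (index 3 on most cards)
    nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=-503"
    nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=2300"

    # Fixed 70% fan instead of the stock curve
    nvidia-settings -a "[gpu:0]/GPUFanControlState=1"
    nvidia-settings -a "[fan:0]/GPUTargetFanSpeed=70"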
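For the power-draw comparison in posts 3 and 4, a quick way to check wattage and rough efficiency from the host with nvidia-smi (the 60.4 MH/s hashrate is just the figure quoted above, typed in by hand):

    # Instantaneous board power draw in watts
    nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits

    # Rough MH/s-per-watt figure, with the hashrate entered manually
    HASHRATE=60.4
    WATTS=$(nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits | head -n1)
    awk -v h="$HASHRATE" -v w="$WATTS" 'BEGIN { printf "%.3f MH/s per watt\n", h / w }'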
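For post 5's extra-arguments problem, a hedged sketch of the kind of docker run line involved. The image name and miner flags here are placeholders, not the actual unRAID template: the point is that --runtime=nvidia plus NVIDIA_VISIBLE_DEVICES exposes the GPU, while --privileged is what allows power-limit changes from inside the container; fan and clock offsets usually also need an X server (or the container's own OC options) before they take effect.

    docker run -d \
      --name=miner \
      --runtime=nvidia \
      --privileged \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      some/miner-image:latest \
        --power-limit 135 --fan 70   # placeholder flags; check the container's docs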
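For the Homebridge webUI issue in post 7, the template change described there amounts to a port mapping like the one below (whether your particular Homebridge image really serves its UI on container port 80 is an assumption taken from the post). Pairing with the Home app is a separate matter: HomeKit discovery relies on mDNS broadcasts, which is why moving the container to a Custom: br0 network, as in post 6, is often what actually lets the accessory show up.

    # Host port 8282 -> container port 80, so the webUI answers at http://<host-ip>:8282
    docker run -d \
      --name=homebridge \
      -p 8282:80 \
      homebridge/homebridge:latest   # or whichever image the unRAID template uses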
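For the single-GPU passthrough trouble in posts 9 and 10, one common approach (not necessarily the fix here) is to keep the host from initializing the card at all by binding it to vfio-pci at boot, so it is handed to the VM untouched. A sketch for unRAID's syslinux config, using typical Polaris/RX 580 device IDs as an example; confirm your own with lspci first.

    # Find the GPU and its HDMI audio function and note the [vendor:device] IDs
    lspci -nn | grep -iE 'vga|audio'

    # In /boot/syslinux/syslinux.cfg, add the IDs to the append line, e.g.:
    #   append vfio-pci.ids=1002:67df,1002:aaf0 initrd=/bzroot
    # then reboot so both functions are reserved for passthrough.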