lnxd

Everything posted by lnxd

  1. No worries. Possibly, I wouldn't pass through one without the other, because the driver might address the audio device expecting it to be there. You can pass through multiple audio devices if you need to have onboard audio in the VM. While you're in there, by the way, I'd dump it with both so that you have a second vBIOS to try if something goes wrong with the first. It's a pain to set up, and ATIFlash is more trusted than GPU-Z for generating a mod-able (complete) vBIOS, whereas GPU-Z is the go-to for producing a vBIOS for VFIO passthrough.
  2. Can you please stop XMRig and post the output of free -m, and also show me a screenshot of your RAM usage from the dashboard? Also, please start the container again and post the output of top -n 1 -o %MEM if there's nothing in there you don't want to share on the forum. What you just posted shows that the container can see 1 NUMA node, so the RAM requirement for 1GB Huge Pages on your host would be 3GB * 1 = 3GB, and you don't have enough RAM available to the container when it launches. So there's something else that's unexpectedly being allocated RAM and preventing XMRig from being able to access it; I'm guessing it's QEMU, but I'll await your response 🙂 If you think XMRig is using more RAM than it should, you can add e.g. --memory=4g to Advanced > Extra Parameters on the Edit screen to limit it to 4GB; there's a sketch of these commands below. Probably not, it's much more profitable to mine Ethereum; it's kind of a waste to GPU-mine Monero. For GPU, check out PhoenixMiner in my signature, it works with both AMD and Nvidia cards.
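     A minimal sketch of the commands mentioned above, run from the Unraid terminal; the 4GB cap is just an example value:
     # Free and used RAM in megabytes on the host
     free -m
     # Single snapshot of processes sorted by memory usage
     top -n 1 -o %MEM
     # Entry for Advanced > Extra Parameters on the container's Edit screen to cap it at ~4GB
     --memory=4g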
  3. Yup, that's right. Can you please post your diagnostics.zip? @giganode I probably confused you by quoting them both in one post; did you mean to tag fritzgeralds, or do you think trig229 is having a vBIOS issue as well?
  4. Near the top of the log it should show a line that looks like this: * MEMORY 12.1/31.1 GB (38%) What does yours say? The container likely reports that you have a higher RAM usage than you actually do. I'm currently researching a similar issue for @SPOautos. If you don't mind posting your diagnostics.zip it might help me find the cause a bit quicker 🙃
  5. I'm not really sure I understand your question. But in addition to what @Hoopster said, if you're just talking about accessing Shares over a network via Samba: yes, it's possible to Export them or not (make them browsable vs. only accessible by typing the path), set Security to only give specific users access (protecting the share with a password), or both. Here's the documentation, and here's a screenshot of the relevant settings (accessible via Unraid WebUI > Shares > {Share Name}) in case this is what you're talking about:
  6. Normally an extracted vBIOS is the size of the GPU's flash ROM, if you use a tool like ATIFlash to extract it. Spaceinvader One's script is really, really awesome in that instead of making an image of the whole ROM, it just reads the data accessible to the VM host directly into a ROM file. It's much more efficient, but it has some caveats, one of which is that if there's something that's not accessible to the VM host, i.e. not in the expected place in the card's ROM, it won't be extracted. Your vBIOS file metadata looks valid:
     Product Name is : 73BFHB.20.1.0.42.AS02P
     Device ID is : 73BF
     Bios Version : 020.001.000.042.000000
     Bios P/N is : 115-D412BS0-101
     Bios SSID : 04F2
     Bios SVID : 1043
     Bios Date is : 11/04/20 02:27
     But I start work in 5 minutes so I'll have to take a proper look later. EDIT: @fritzgeralds This vBIOS file isn't complete. I don't know for sure it's the cause of your problems, but it's missing several values that would be in a valid vBIOS file. I'd say it's possibly your problem. If you have a Windows VM, the easiest way to get a valid vBIOS is to:
     1. Make sure the latest AMD drivers are installed (you can do this step with the GPU as primary).
     2. Set up your GPU in the secondary slot (don't miss the little green + button on the left), and make sure the sound card is also passed through.
     3. Use either ATIFlash or GPU-Z to save the ROM to a file.
     Screenshot for step 2: While it's not what you're asking about, please note that I still wouldn't flash this vBIOS to the card. Although it'd most likely be fine, VFIO might cause problems of its own in the generated ROM, even though flashing the card over VFIO is itself perfectly fine. If you needed a flash-able copy some time down the line, the safest way to get a safe-to-flash version is probably running Windows (or Linux, but I don't think there's a GUI version) on bare metal with the GPU as secondary. This is the sort of behaviour you'd see with the vendor-reset issue, but your card shouldn't be impacted. Can you please post the output of dmesg | grep amdgpu, run on the host? Preferably within a minute of starting the VM, and then again within a minute of stopping the VM; there's a sketch of how I'd capture that below.
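     A sketch of capturing those amdgpu logs from the Unraid host; the VM name Windows10 is an assumption, and you can start/stop the VM from the WebUI instead of using virsh:
     # Start the VM, then grab the amdgpu messages within a minute
     virsh start Windows10
     dmesg | grep amdgpu > /boot/amdgpu-vm-start.txt
     # Shut the VM down, then capture again within a minute of it stopping
     virsh shutdown Windows10
     dmesg | grep amdgpu > /boot/amdgpu-vm-stop.txt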
  7. 😂 Tuning is definitely key.
  8. Oh my god... Not only is this container amazing, I got it to work! ☺️ For anyone interested, it's not too difficult. I had to set some permissions, clone the source and build the vnc-version, but it works.
  9. Where previously it was possible to use link, I believe the expected way to do this now is to have the containers on the same docker network. You'll have fun trying to get them to talk on something like a macvlan network, so I'd stick to Bridge to keep things simple. So, assuming your primary Bridge network is called br0, they would both be on br0. Then, as docker dynamically assigns IPs to the containers when they start, and these local IPs don't always persist, the best thing to do is refer to the containers using the container name as a host name. Assuming you're trying to get Radarr to route via binhex-delugevpn's Privoxy:
     1. Make sure both containers are on the same Bridge network so that they can see each other (there's a command-line sketch of this step below).
     2. Find the container name (assuming privoxy, but it's probably binhex-delugevpn) and port (assuming 9118 for Socks5).
     3. Visit Radarr's WebUI > Settings > General, check Use Proxy, and enter the details from step 2.
     You'll want to use the same configuration for the downloader, etc. as well. There are multiple ways to do this, but this is (I believe) the developers' intended way for users to set up this kind of network. Once you have it set up that way, the GUI should be visible while Radarr routes its traffic via Privoxy. It's a similar setup for the other containers mentioned as well. Unless you're trying to use Privoxy as a reverse proxy and I'm missing something? In case anyone is interested, here's the page in the docs that talks about using --network container:name
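     A minimal command-line sketch of step 1, assuming the container names radarr and binhex-delugevpn; on Unraid you'd normally just pick the same Network Type in each container's template, but the equivalent docker commands look like this:
     # List the networks docker knows about (bridge, br0, etc.)
     docker network ls
     # Check which network each container is attached to
     docker inspect -f '{{json .NetworkSettings.Networks}}' radarr
     docker inspect -f '{{json .NetworkSettings.Networks}}' binhex-delugevpn
     # (Optional) a user-defined bridge also gives you container-name DNS between them
     docker network create proxynet
     docker network connect proxynet radarr
     docker network connect proxynet binhex-delugevpn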
  10. This is really interesting. So with this User Script, assuming it's run on each Array Start, it modifies the WebUI and is able to effectively hibernate the VM to /mnt/user/domains/save and restore from the same location. In other words, a VM can be paused, the server rebooted, after which the VM can still be resumed. Am I understanding correctly?
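      For context, a minimal sketch of the libvirt commands a script like that presumably wraps; the VM name Windows10 and the exact file name are assumptions:
      # Hibernate the VM: write its RAM and device state to a file and stop it
      virsh save Windows10 /mnt/user/domains/save/Windows10.save
      # (The server can be rebooted here)
      # Resume the VM from the saved state file
      virsh restore /mnt/user/domains/save/Windows10.save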
  11. No worries, and thank you!! ☺️
  12. Perfect! Ah, I just noticed you said 4x Intel® Xeon® CPU E7-8870, I was wondering how you were getting above 9,000! I'd let it mine for a while and see how it averages out; it might get a little higher, but it's also responsible for looking after everything else on your server, so it's possibly doing quite well. As you pinned the CPU the way I suggested in Step 6, that could explain why you're getting a little lower than you would on all cores and threads. Being a server, I'd say reliability over hash rate any day. Looks like you agree with all that ECC RAM ☺️ All good, what you're seeing is expected. It means XMRig is set up correctly.
  13. No worries Phil! You shouldn't need to restart the server to see the effect of the additional argument, just the docker container, but that will happen automatically when you Apply it. If you ran the script, technically you don't even have to restart the container. Normally XMRig would update the MSR directly, but because it's run from a container, even with msr-tools installed in the container and /dev/cpu passed through and chmod -R 777'ed, I can't get it working. But the script updates the value directly from the host, and XMRig can recognise the changes on the fly, so there's no need for it; there's a rough sketch of what that looks like below. You'll need to jump on the latest-root tag in order to get rid of that error, I'm not sure what causes it specifically. To get on the latest-root build: EDIT: Sorry, I posted too soon. @horphi if you don't mind posting your diagnostics.zip, that might help me solve the problem for other users without needing to run XMRig as root ☺️
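      Roughly what updating the MSR from the host looks like with msr-tools; the register and value below are the commonly cited Intel prefetcher tweak and are only an illustration, as the script linked in the OP picks appropriate values for your CPU:
      # Load the kernel module that exposes /dev/cpu/*/msr
      modprobe msr
      # Read the current value of MSR 0x1a4 on CPU 0
      rdmsr -p 0 0x1a4
      # Write the new value to that MSR on all CPUs; XMRig picks the change up on the fly
      wrmsr -a 0x1a4 0xf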
  14. 😅 That's not the only way, although it would be an easier path. I think what @jonp means is that until recently, AMD didn't really design their GPUs with VFIO passthrough in mind, which causes quite a lot of issues for what you're trying to do. AMD have slightly improved with their recent GPUs, but Nvidia has started officially supporting it. If you want to get the GPUs you have working, you can check out Gnif's vendor-reset project, which lists the RX 590 as compatible. @ich777 made it extremely easy to implement this into the kernel using his Unraid Kernel Builder container. I believe I read somewhere that Limetech recommends against replacing the kernel with a custom build now, and there are possibly implications for getting support when you're running on a custom kernel. That said, it's been working perfectly for me since before the update to 6.9.1, and it's very easy to go back to the stock kernel as long as you back it up first. I'm starting to repeat myself on different threads, so here is my answer on an earlier post: EDIT: Also, just to add, if you don't have any luck and you're interested in mining, you'll get an insane hash rate with those RX 590s on PhoenixMiner.
  15. No worries at all @fritzgeralds. His script produced a valid vBIOS for one of my GPUs but not the other, so I'm very interested to see how yours looks. Your symptoms could possibly be explained by an incomplete / invalid vBIOS. That said, I just noticed Christoph tagged giganode in an earlier post, so it looks like some of the real pros might be taking a look for you fairly soon as well 😅
  16. I'll trade you a 5500 XT for that 6800 XT if you're keen. On a serious note, can you please post your vBIOS here? Did you dump it via a Windows VM with the GPU as secondary, or some other way?
  17. Haha all good, you guys made me think. I'd never actually tried using the data before, I just knew the theory was sound. Perfect! No worries. I'm also pleased I didn't make any mistakes in my commands 😅
  18. Something like this, assuming the following:
      Backup Path: /mnt/user/backups/github
      Working Directory (Share): /mnt/user/projects
      GitHub Username: lnxd
      Repo Name: docker-alpine
      Current Branch: master
      SSH into your Unraid server or open a terminal from the WebUI and run:
      # Make the project directory in your working path
      # Note that if you do not change this to a valid path it will make a share on your server called projects
      mkdir -p /mnt/user/projects/docker-alpine
      # Copy the backed-up data to the hidden .git directory inside the project directory
      cp -r /mnt/user/backups/github/lnxd/docker-alpine /mnt/user/projects/docker-alpine/.git
      # Change your present working directory to the project directory you created
      cd /mnt/user/projects/docker-alpine
      # Fetch the contents of .git
      git fetch
      # (Optional) Check the name of the branch, usually either master or main
      # Press q when done as it starts an interactive session
      git branch
      # Check out the branch you want
      git --work-tree=/mnt/user/projects/docker-alpine checkout master
  19. Hey there, sorry for the delay, I missed your question. The script should be run on your Unraid server. (random) People (on the internet) are reporting around 1900 H/s with your CPU for mining with RandomX. You can also try adding --asm intel to the Additional XMRig Arguments field, like so:
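      For reference, the Additional XMRig Arguments field presumably just gets appended to the xmrig command the container runs, so with --asm intel the result is roughly equivalent to something like this (the pool and wallet are placeholders; your real values come from the template):
      xmrig -o pool.supportxmr.com:443 --tls -u YOUR_WALLET_ADDRESS --asm intel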
  20. Awesome to hear! All good, this is how it should look. This is similar to how your data is stored on GitHub's servers, and contains the entire history of your repos. You need to use the git fetch and then git checkout commands if you want to access a particular version of the data from the folders you can see in the screenshots. If you're used to a GUI, it's also possible (although arguably less efficient) to do it using GitKraken or GitHub Desktop, for example. In addition to that, yes: if you clone a git repository without specifying a depth, it will back up the same data which you can see in the .git folder. However, what you're probably used to is that it also checks out a specific branch (by default at the latest commit) at the same time; there's a quick comparison below. The point of this container is just so that you have a local backup of all of the repositories you have on GitHub. Although you could possibly use it with rsync + Gitea/GitHub Enterprise/GitLab CE to host your own local mirror of your GitHub account, that's well outside of what I can support here 😅
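      A quick comparison of what that means in practice, using the example repo from the earlier post:
      # Full clone: downloads the complete history into .git and checks out the default branch
      git clone https://github.com/lnxd/docker-alpine.git
      # Shallow clone: only the latest commit, so it is not a full backup of the repo's history
      git clone --depth 1 https://github.com/lnxd/docker-alpine.git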
  21. Hopefully not 😅 If you want to find out for sure, I just added a step 8 to OP that optimises the values if you want to try it. Depending on feedback here, if people get higher hash rates from it, I might turn it into a plug-in.
  22. Thanks for letting me know, but just to reiterate, everyone will see that error on an Unraid host unless I disable the MSR MOD 😅 Also, no need to use Privileged unless you're getting unexpected errors. If you want to use 1GB pages, you just need to add --randomx-1gb-pages to the Additional XMRig Arguments section; there's a rough sketch of the host side of that below. Just make sure you know what to expect if you do enable it by reading this first. EDIT: I need another couple of testers for the MSR MOD to see if it impacts hash rate enough, if anyone is interested please PM me. It's not entirely risk-free, but the written MSR values are reset by a reboot.
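      A rough sketch of the host side of 1GB pages, assuming your CPU supports them and using the 3GB-per-NUMA-node figure from post 2; XMRig can also try to reserve them itself when it has enough privileges:
      # Reserve three 1GB huge pages on the host (not persistent across reboots)
      echo 3 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
      # Check how many were actually allocated
      cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
      # Then add this to the Additional XMRig Arguments field
      --randomx-1gb-pages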
  23. Can do, I tried to think of everything but that never works 😅 so I appreciate the feedback!
  24. Hey @Partition Pixel yup, not being able to use the MSR mod could be holding it back. My container supports it on hosts that have it enabled, but Unraid requires a bit of work to get it going. I'll PM you when I get home and give you some steps to try; if it works for you, I'll merge it into the readme and OP.
  25. Thanks for helping out while I was asleep @ich777! It's not an option with PhoenixMiner, but all good. This was the first container I published; I really just published them all to help out the community. With XMRig I just noticed that'd be possible while I was browsing the source code trying to make no-fee optional. Also, honestly @ich777 understands how to pass through an Nvidia GPU to Docker far better than I do; I didn't even know my container would work with Nvidia until he told me and tested it. Nice hash rate @frodr, I'm going to update the template and guide as soon as I get a chance so that other people don't get stuck. I don't have an Nvidia GPU for testing