lnxd

Everything posted by lnxd

  1. Most likely yup, looks like you also have a BIOS upgrade available, so it's probably a good time to see whether that's worth updating as well while you're solving that problem. I'm hoping it's real, it could really go either way. But I'm definitely interested in seeing those pics / finding out whether this kind of output is normal for this series. No worries at all! I have two builds ready for you to try:

        lnxd/phoenixminer:v1.0.0-20.45
        lnxd/phoenixminer:v1.0.0-18.20

     If you punch those into the Repository field on the container's edit page and start them up one at a time, let me know if either of them works for you. The first one is ideal if it does, but I'm pretty sure the drivers failed to install; it's way too large to be valid. If neither of them works, I'll spin up a VM, build the first one on an older kernel and push it to Docker Hub for you to try out. Edit: I forgot to update the base title on the second one, it'll report Ubuntu 20.04 but it's actually Ubuntu 18.04. Just ignore it, it won't affect anything.
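
     If you'd rather grab them ahead of time from a terminal it's just a plain docker pull (completely optional, changing the Repository field will pull the tag for you anyway):

        docker pull lnxd/phoenixminer:v1.0.0-20.45
        docker pull lnxd/phoenixminer:v1.0.0-18.20
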
  2. Permissions are correct and you don't have a rogue config file. lsmod on your system shows that only amdgpu has been loaded, not radeon, but the output of your lspci is strange:

        0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Bonaire [FirePro W5100] [1002:6649]
                Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] FirePro W4300 [1002:0b0c]
                Kernel modules: radeon, amdgpu
        0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tobago HDMI Audio [Radeon R7 360 / R9 360 OEM] [1002:aac0]
                Subsystem: Advanced Micro Devices, Inc. [AMD/ATI] Tobago HDMI Audio [Radeon R7 360 / R9 360 OEM] [1002:aac0]

     Do you have more details about this card? The subsystem device ID (0b0c) for the VGA controller matches a W4300 and the audio device (aac0) an R7/R9. Weirdly, on AMD's website they list the W5100 as compatible with the amdgpu 20.20 pro drivers, but if you have a look at the card's driver support page it's supported by a different driver package (Radeon™ Pro Software for Enterprise on Ubuntu 20.04.1). I'll do a custom build for you with the enterprise drivers and see if that works. Heads up: to get it working you may need both the amdgpu and radeon modules loaded.
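
     For when that build is ready: loading both modules by hand before starting the container should just be standard modprobe, plus the same /dev/dri permissions step from the other posts. Rough sketch:

        modprobe radeon && modprobe amdgpu && chmod -R 777 /dev/dri
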
  3. That should work, did you restart afterwards? Please post the output of:

        ls -lh /boot/config/modprobe.d/

     And if the first diagnostics file wasn't taken after you blacklisted the Radeon drivers and then rebooted, post another diagnostics file soon after you reboot. I'll take a look at the output when I get home.
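
     Also, once you've rebooted, a quick way to see whether the blacklist actually took effect is to check which of the two modules ended up loaded (plain lsmod/grep):

        lsmod | grep -E 'radeon|amdgpu'
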
  4. Hey @pronto, well found, you’ll need to use the amdgpu module for this container to work. How did you go once they were blacklisted?
  5. Glad to see you had success with that, and I'm (even more) glad you referenced that comment. Just keep in mind there's a less extreme approach with much less risk if you just want to delete the dangling images:

        docker image prune

     will just delete the dangling images, and

        docker image prune -a

     will delete all dangling and currently unused images. vs.

        $ docker system prune --all --volumes
        WARNING! This will remove:
          - all stopped containers
          - all networks not used by at least one container
          - all volumes not used by at least one container
          - all images without at least one container associated to them
          - all build cache

     You could even safely set docker image prune as a User Script to run daily/weekly if it's a really regular issue for you.

        #!/bin/bash
        # Clear all dangling images
        # https://docs.docker.com/engine/reference/commandline/image_prune/
        echo "About to forcefully remove the following dangling images:"
        docker image ls --filter dangling=true
        echo ""
        echo "(don't!) blame lnxd if something goes wrong"
        echo ""
        docker image prune -f
        # Uncomment to also delete unused images
        #docker image prune -f -a
        echo "Done!"
  6. No worries at all @BKS The issue you were having indicated that the WebUI was having trouble communicating with Docker so that's probably why it worked. I'm a Safari user through and through and none of those features were built with Safari compatibility in mind so I usually just stick to ssh 😂
  7. Did you restart the docker service?
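
     For reference, you can do that from Settings > Docker (disable then re-enable), or from a terminal, assuming the stock rc script is where I expect it on your Unraid version:

        /etc/rc.d/rc.docker restart
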
  8. It looks like something is broken somewhere, but this is what I'd try. Make sure you understand each step before you do it.

     1. Run docker network ls via the Unraid terminal to get a printout of all networks, in case there are some the GUI can't see.
     2. Make sure any network you want to keep is assigned to at least one running docker container by checking the web UI, otherwise you might lose it after the next step.
     3. Run docker network prune via the Unraid terminal to delete all unassigned networks.
     4. Create the network manually using docker network create via the Unraid terminal rather than through the GUI:

        docker network create \
          --driver=bridge \
          --subnet=172.17.0.0/24 \
          --gateway=172.17.0.1 \
          br2

     It should then (hopefully) be visible when you're setting up a docker container via the GUI. If not, you can always assign it to the container after it's created:

        docker network connect br2 unifi-controller

     Where unifi-controller is the container name you're using.
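
     If you want to confirm the new network actually took the subnet/gateway you specified before attaching anything to it, this is just the standard Docker CLI:

        docker network inspect br2
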
  9. I used to get it quite regularly; in my case it was usually a browser issue.

     1. Test in private browsing mode.
     2. If that worked, clear your browser's cache for your Unraid server.
     3. If that didn't work, jump into a terminal window and verify you can enter the container:

        docker exec -ti containername /bin/sh

     If only step 3 works, try a different browser. Failing that, you might have to restart the docker service and/or your server. Post back if you still have trouble.
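
     If step 3 fails outright rather than just misbehaving, it's worth confirming the container is actually running first (plain Docker CLI, containername is whatever yours is called):

        docker ps --filter name=containername
        docker inspect -f '{{.State.Status}}' containername
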
  10. Nooooooo. Oh well, I don't think we're going to get much further, because that was pushing the limits from a compatibility standpoint. I'm going to experiment over the next few weeks with TeamRedMiner, I'll let you know if that opens up new possibilities. The only thing that's left for you to try, and it's a long shot because modprobe radeon should have done enough, is similar to what I said in an earlier post.

     Radeon Kernel Module Specific

     Per the user guide, you can open up a terminal window via the Unraid WebUI and run this before restarting to unblacklist the host's radeon drivers:

        touch /boot/config/modprobe.d/radeon.conf

     And, sometime in the future, if you need to be able to use vfio passthrough with your GPU, you can run this before restarting to undo it:

        rm /boot/config/modprobe.d/radeon.conf

     You could then try starting the container again once your server starts back up. Feel free to post your diagnostics zip and I'll take a look in the morning just in case as well. But as long as you didn't miss any steps, there's really nothing else I can think of to get older cards working with this container. If you could also please show me the output you get when you press save after running the container, I'll check that as well. Eg.

        root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='PhoenixMiner-AMD' --net='bridge' --privileged=true -e TZ="Australia/Sydney" -e HOST_OS="Unraid" -e 'WALLET'='0xe037C6245910EbBbA720514835d31B385D76927f' -e 'POOL'='asia1.ethermine.org:4444' -e 'PASSWORD'='x' -e 'TT'='-75' -e 'TSTOP'='85' -e 'TSTART'='80' -e 'ADDITIONAL'='-amd -retrydelay 1 -gt 64,15 -ftime 55 -powlim 25 -clGreen 1 -mclock 1850,0' -p '5450:5450/tcp' --device='/dev/dri:/dev/dri' 'lnxd/phoenixminer'
        1f4152332f574ec26dbc339f21d6ef27215f414438e4ad4c8eb732a613b7142b
        The command finished successfully!
  11. Thanks for trying that build for me mate. 5 hours into development I'm a bit of a zombie, but I got a build working in an Ubuntu 14.04 container with the correct drivers. If this one doesn't work I'm completely stumped. I'm just going to clean up my code and get a build on DockerHub, should be about another 20 mins, I'll tag you in a post. EDIT: @Lobsi all ready. Same process as usual, should just take an update, and then confirm it shows Ubuntu 14.04.02 in the logs. It's wayyy too many layers but that's why it took me more than 20 minutes, every time I started merging them I'd break something. Fingers crossed it sees your card! Yup, mine's sitting at 4318.41mb. It could be an error with my container but I'm getting the same hash rate as I was running it in a VM, I've never really noticed that it doesn't fill it up. EDIT: Sorry I'm half asleep 😂 that's around the current DAG size, so it's a good thing our memory is only half utilised otherwise we wouldn't be able to mine Eth for much longer.
  12. No problem at all! Spot on mate, that build I got you to try was still using 20.04. My development process lets me iron out any of the bugs until it all looks fine. The thing I can't try is seeing whether it can see the card, so that's when I pinged you last time. Sadly the 14.04 version didn't get that far 😂 We're just at that point again now too, if you could please force the container to update and try running it again with the same lnxd/phoenixminer:radeon tag, the logs should show Debian Bullseye as the base if you're successful. Please let me know if this build can see your card 🙃
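
     Once the forced update finishes, you can also peek at the base from a terminal instead of the WebUI log viewer; I'm assuming your container kept the PhoenixMiner-AMD name from the template and that the base is printed near the top of the log:

        docker logs PhoenixMiner-AMD 2>&1 | head -n 20
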
  13. Thanks for trying for me. Well, the default container in theory could support your card via the mesa open source drivers if we were using a different host. What I just got you to test was something that forced an update to the newer open source drivers from a different PPA. The most recent version of the official radeon drivers for your GPU I could find (Crimson Edition 15.12) were made for Ubuntu 14.04, so I used that as a base, installed the drivers, and then of course ran into a compatibility issue with PhoenixMiner: it depends on libraries that just weren't available back then and haven't been backported. As one final shot, I'm gonna see if I can get PhoenixMiner running on another base, probably something that's still compatible with Crimson Edition 15.12.
  14. Thanks @Lobsi, can I please get you to make sure you've completed steps 1 - 7 in the OP (except 3, because Radeon TOP won't work for your card). Once you've done that, jump into an Unraid WebUI terminal or SSH into your server and run:

        modprobe radeon && sleep 1 && chmod -R 777 /dev/dri

     Then edit the PhoenixMiner-AMD docker, and change Repository from lnxd/phoenixminer to lnxd/phoenixminer:radeon. Then try running the container again and tell me if you get the same error. There's a real chance you will because this is just a preliminary test, but here's hoping I haven't missed anything 😅
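
     To confirm the render nodes exist and picked up the new permissions before you start the container, a quick check from the same terminal:

        ls -l /dev/dri     # you'd typically expect card*/renderD* device nodes here
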
  15. It'd be stored magically by Docker somewhere in /var/lib/docker, which is the mount point Unraid uses for docker.img. The next time the container gets updated it'll be deleted. Next time it happens, with the Docker daemon started, open a terminal window and run:

        docker ps --all --size

     It'll take a long time, but it'll spit out something like this (scroll all the way to the right in the following code block to see the sizes):

        CONTAINER ID   IMAGE                COMMAND                  CREATED       STATUS       PORTS                    NAMES              SIZE
        9c50c41958a2   lnxd/xmrig           "./init.sh"              2 hours ago   Up 2 hours                            xmrig              3.06kB (virtual 15MB)
        0a7f2f85a980   lnxd/phoenixstats    "docker-php-entrypoi…"   4 hours ago   Up 4 hours   0.0.0.0:5449->80/tcp     PhoenixStats       1.67kB (virtual 415MB)
        083266bebb23   lnxd/github-backup   "./backup.sh"            7 hours ago   Up 2 hours                            github-backup      202B (virtual 81.1MB)
        8fb9e851dd0d   lnxd/phoenixminer    "./mine.sh"              7 hours ago   Up 7 hours   0.0.0.0:5450->5450/tcp   PhoenixMiner-AMD   6.48MB (virtual 1.61GB)

     You can then double check to see if you've configured the container correctly, and haven't missed a volume map.
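
     If the full docker ps --all --size takes too long, a quicker overview of where the space is going (images vs. containers vs. volumes) is also built into the Docker CLI:

        docker system df        # summary
        docker system df -v     # per-image / per-container breakdown, slower
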
  16. Hi @Lobsi, it was going to happen! Someone found one that's still worthwhile to mine with 😅 Let me do some research and see if I can get this working for you somehow. It's not supported in this container as it stands currently, you'll need a specific build that includes different drivers and you'll probably need to unblacklist radeon. But I'm sure it's possible. EDIT: @Lobsi In your signature, it says your "AMD Radeon R9 270X" is in Server 1 which is running "unRaid 6.2 RC2"! That's not going to work. Are you on a more up to date version now?
  17. Thanks man, I'm glad you like it all. Hopefully people get some use out of it and make some money. I'll play with the idea and see if I can come up with anything. I think you found a good option there. I'm sure there's an effective way to do it, yeah. I tinkered with it on my M1 MacBook Pro. When I plugged it in for the night it'd automatically mine, so I'd just leave it leaning on the glass window so it could maintain its temperature. They can make around $3AUD a month after electricity, so after 110 years of mining while I sleep (at current difficulty and XMR price) it could pay itself off 😂 This is xmrig mining with 6 threads from a container on my server just now with the side off. Could be worse temp wise but it's less profitable than my MacBook Pro:
  18. This is almost a theological question 😂 There's probably a more definitive answer somewhere, but my understanding is that it depends on everything from the silicon lottery to the environment it's used in. A lot of miners will mine with them for a year or two, then on-sell them to gamers (who use them the way they're meant to be used) once they're no longer profitable for mining. Unless you're letting your card get to extreme temperatures, you're probably more likely to have problems with the fans (being mechanical) than the actual GPU, and those are a replaceable (and reasonably cheap) part.

     Never. It might not be a valid comparison, but I see it like cars that are shared between taxi drivers: they basically run 24h a day, rack up 500,000+ km on the odometer and never have an issue, vs. people who drive their cars to and from work/the supermarket and are forever having to replace the battery. For cars it's the starting/stopping that causes a lot of problems; for GPUs, I think it'd be large temperature fluctuations that'd do more harm than good.

     You can't mine with RAM, but you do need some RAM for CPU mining. It's probably profitable for you to mine Monero with your CPU; I containerised xmrig before I did PhoenixMiner, but my Noctua NH-U12S chromax.black couldn't cope with the heat. If you think you can handle the heat, let me know and I can put a container on CA.

     PhoenixStats should be working now by the way, you just need to Check for Updates > Update Now. Just don't port forward it to the internet, because I have to work out what's wrong in the .htaccess file.
  19. Mine are on the higher side, don't get too comfortable, but you're well within the safe operating ranges. Just keep an eye on your hard drives as well, those don't do well at higher temperatures.

     Uhhhhhh it should definitely look like the screenshot. I just noticed a last-second change I made at like 1am last night broke it, a fix will be coming shortly 😅 Thanks for letting me know.

     Thanks! As you know, I didn't develop PhoenixMiner (I wish I did based on their dev wallets) or even do the groundwork for PhoenixStats (it was built on top of someone's monitor from like 2 years ago), but a surprising amount of R&D goes into even building on top of other people's work, so I appreciate the thanks. I completely get the fun aspect of mining as well, that's half the reason I started. With those numbers (25-28MH/s) you should get around 0.04293eth - 0.04809eth per month at current difficulty ($68.72 - $76.97 USD) after PhoenixMiner (0.65%) and Ethermine's (1%) fees are taken into account (I get nothing). At the very least it will make otherwise static hardware profitable.

     That's exactly right. I tried a few and eventually settled on Ethermine; I had similarly good results with Nanopool. You're going to get pretty similar results no matter which (popular) pool you pick, just keep in mind that they all (except NiceHash, but that's a different concept) have minimum payouts, and there is a small fee for each payout. Ideally you'd find a good one and stick with it. Apart from your results over time (income), the main number to pay attention to when determining if you're with a suitable pool is the actual hash rate vs your reported hash rate. For me, the reported hash rate in PhoenixMiner has been pretty consistent with the actual hash rate visible from Ethermine. Right now, your reported hash rate to Ethermine is 27.9MH/s but your actual hash rate (what they're paying you for) is 31.8MH/s. You can see all this at your metrics link on Ethermine, including your calculated profit based on your mining performance over the last 24h.
  20. Very good call. My i5-10500 sits at around 60c-65c under avg. 50% load (my server is always under load because my whole family uses it). My M2 970 Evo Plus cache drive hovers between 45c-50c, but sometimes hits like 65c-70c under load. I've forgotten my pre-mining temps, but I can't safely put the tempered glass side back on my server while it's mining since I added in the 5500xt. My other drives usually sit between 35c-45c, but they're not in my PC case, they're in a separate enclosure. My UPS shows 555w power draw for most of the day, but that's with all my networking equipment connected to it as well. I gave up on fan noise, I'm used to my office having an idle hum to it now. At least it covers the sound of my Ironwolf Pro parity drive 😂
  21. It runs the fan depending on -tt (Target Temperature) and your GPU's fan curves. I have my fan speed set to a fixed value due to temperature fluctuations in the room it's in. The -tstop (Stop Temperature) is a cutoff temperature for the GPU, i.e. you have yours set so that if the GPU reaches 70c, it will stop mining until it cools down to the -tstart (Resume Temperature). 70c is probably a little low for the Stop Temperature on an RX 580 if you're on stock BIOS. They can handle up to around 100c for short bursts, but that's detrimental to your other hardware. I'd set the Stop Temperature to around 80c-85c, and the Start Temperature to 75c-80c. Keep in mind it shouldn't be hitting these ranges unless you have an airflow problem.
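
     For reference, the template exposes these as the TT / TSTOP / TSTART fields (which, as far as I can tell, map straight to PhoenixMiner's -tt / -tstop / -tstart), so in line with the ranges above the generated run command would end up with something roughly like this; the exact numbers are just an example:

        -e 'TT'='65' -e 'TSTOP'='85' -e 'TSTART'='78'
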
  22. Perfect! It's on by default, so you should be fine unless you turned it off.
  23. Wow that looks beautiful, doesn't it? I'm so proud. Just kidding. I just updated the template a few seconds ago to make it more generic, you've left the default settings so it's trying to connect to my server IP which doesn't exist on your local network. You'll need to edit the docker container and add in your own server IP as the Miner Host which appears to be 192.168.1.103, you can also add in a Server Name eg. Unraid
  24. Wooh! Congrats. In the short term, you can see it in the docker logs. In the long term, probably PhoenixStats in CA.
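
     For the short-term option, assuming your container kept the PhoenixMiner-AMD name from the template, you can follow the logs live from a terminal:

        docker logs -f PhoenixMiner-AMD
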
  25. Overview: Support thread for lnxd/github-backup in CA. Application: https://github.com/abusesa/github-backup Docker Hub: https://hub.docker.com/r/lnxd/github-backup GitHub: https://github.com/lnxd/docker-github-backup This container contains a script, backup.py, for backing up GitHub repositories. The script requires a GitHub token and a destination directory. It then uses the token to populate the destination directory with clones of all the repositories the token can access. It is possible to set it to run on a schedule, and repeated runs only update the already existing backups and add new repositories, if any. Instructions: Installation can be completed via CA. All you need to do is grab a token from here and fill out the template. Feel free to comment here if you need any assistance.
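
     If you'd rather run it outside CA, the general shape is just a token plus a destination volume. The variable and path names below are hypothetical placeholders, not the template's actual field names, so check the CA template / Docker Hub page for the real ones:

        # hypothetical sketch only - the real variable/path names come from the CA template
        docker run -d --name='github-backup' \
          -e 'TOKEN'='<your GitHub token>' \
          -v '/mnt/user/backups/github':'/backup':'rw' \
          'lnxd/github-backup'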