FlamongOle

Community Developer
Everything posted by FlamongOle

  1. Any possibility of configuring the Dashboard page? All my fans are controlled by IPMI, and under "Airflow" one fan gets listed running at 0 RPM 😛 Minor thing, but it would be nice to remove sections that aren't used or don't give proper info.
  2. Got affected by this when I swapped ISP; I'd been scratching my head for some days because of it. Thanks. I believe the next Unraid version will fix this whenever it launches.
  3. Thanks! You'll get that when you click to view the disk array in the big view. Otherwise I don't think Unraid's numbering makes a lot of sense, which is why I made this plugin to begin with. Maybe in the future if I have time and motivation, but I won't promise anything.
  4. Update 2021.09.17 Commit #172 - BUG: JavaScript MIME type fixed (probably) for the locate script; some might have experienced the problem with Firefox. Commit #171 - SECURITY: Boring security update; fixed some SQL and JavaScript injection security issues.
  5. You can already change the order; however, not with a simple click after it is created. But it should be fairly simple to just move the devices over after adjusting the layouts. I might make it simpler in the future, but it's nothing I prioritize. If you want to move e.g. layout 3 to 1, copy the layout of the largest one, move the disks, then reduce the other one (if the sizes are different). If they're all the same, it's just a matter of reassigning the devices. Font colors follow the Unraid theme; I'm not messing around with that (at least for now). Choose a better background color instead.
  6. This was the main intention from the start, but I haven't found a proper solution for it yet. "by-path" is fine for most cases (as far as I can see), but it seems to have problems addressing SSDs. I have one SATA SSD and two NVMe SSDs that it does not add to the path section, and I don't want to mix two different systems for organizing drives in Disk Location. Not sure if this is a kernel issue, something that was never meant to work, or if there's another solution for this altogether. The command below will show devices with paths defined. Even if you did see your SSDs there, it would break many systems and can't be implemented now.
      ls -l /dev/disk/by-path/ | egrep -v "part|usb"
  7. Update 2021.08.17 FEATURE: Added custom start number offset for trays other than 0 and 1. Also cleaned up the disk information tables a bit.
  8. Update 2021.08.16 BUG: Removed the "prettyname" variable as it wasn't allowed with special characters. Might fix issues where some people didn't see "Disk Location" in the menus. I have been busy for quite a while, but this update, thanks to @Squid pointing out the flaw, probably fixes it. Please go ahead and try it out. Thanks!
  9. I haven't tried and likely won't. I'll leave the docker and the page up, but I will end the "support" now due to the decreasing popularity of mining and because Docker Hub won't do free automatic builds anymore.
  10. It sort of says it: either lower the memory clock, or increase the power and/or fan speeds and see if that helps. GPUs are individual; try different values.
  11. Check first page, first post 😛
  12. Don't use that version on the host (if you want to OC); I haven't tried 470. The container handles the rest and is set up with the default Nvidia driver, which will likely be replaced regardless - hence the mismatch. Nothing to worry about regarding what's happening inside the container.
  13. Front page updated with the Nvidia driver version, and the clock table now has additional information below it.
  14. If it requires an additional startup flag, then probably not. If it just requires open ports/port mapping, you can set that up manually yourself in the template setup and check if you can connect to it. I won't be spending more time on this docker container unless it stops working.
  15. Ye, had the same experience here. Not much I can do about it. Maybe there's some conflict with the 465 drivers between docker and Unraid even if it's built from the same source, or it's just a bug. I won't spend time investigating this as we're probably near the end of the Ethereum mining era. The stable 460 drivers work, so I'll write a note about it on the front page. Thanks for the info though.
  16. It should auto-detect and install the correct drivers. Maybe the location it downloads drivers from is down for maintenance or something. I can't check it right now, but it worked for me yesterday.
  17. Nice, decided to fiddle around with your numbers.. and should have done a bit more fiddling earlier.. ended up with these results on the RTX 3070 (a rough command-line sketch of applying settings like these is included after this list):
      Power limit: 120W (seems to be the minimum allowed for my card)
      Clock offset: -550
      Memory offset: 2300
      Fan: auto
      Hashrate: 60.2 MH/s - so apparently very effective!
  18. I have an RTX 3070 Gigabyte Gaming OC 8GB, but it sits inside a relatively hot server, so I expect some hashes might be lost just from that. But it's important to compare with the same tools; maybe NiceHash calculates hashrates differently as well. A fair comparison would be using nsfminer on Windows and the one in the Linux/docker container, and then using nvidia-smi on both Windows and Linux to determine power usage (maybe nvidia-smi also shows the same clocks across platforms?).
  19. This might be related and known: https://www.reddit.com/r/EtherMining/comments/7lfbe0/windows_vs_linux_which_is_more_power_efficient/ What I noticed with my RTX 3080 is that I get way more stales in Windows than in Linux. Windows is about 6-10% stales (always some stales), in Linux 0-1% (mostly no stales). There may be more factors involved that are hard to pin down.
  20. And also, do you use the same tool to check the actual power usage, nvidia-smi? Some 3rd party tools might show different numbers, I dunno.. just a thought. (The nvidia-smi query in the sketch after this list is one way to check.)
  21. I have no idea, different drivers perhaps (Windows and Linux drivers might behave differently for all I know). I haven't tried it myself, but ensure that the GPU and memory clocks are exactly the same (the input differs between Windows and Linux drivers).
  22. Might work fine with one GPU; I didn't have any luck trying to pass through more than one to a VM. Dunno why, but docker worked for me 😛
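
For reference, here is a rough sketch of how the tuning from post 17 and the same-tool power check from post 20 could be done from a Linux shell. This is only an illustration, not part of the container: it assumes GPU index 0, that the clock offsets are set through nvidia-settings (which needs a running X server with "Coolbits" enabled), and that performance level [3] is the right one for the card - the exact level, the offset scale, and the allowed power range vary per GPU and driver.

# Power limit from post 17 (needs root; must be within the card's allowed range)
nvidia-smi -i 0 -pl 120

# Core/memory offsets from post 17 (assumption: X + Coolbits, level [3] differs per card)
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=-550"
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=2300"

# Same-tool comparison as in post 20: power draw and clocks every 5 seconds,
# works with nvidia-smi on both Windows and Linux
nvidia-smi --query-gpu=power.draw,clocks.sm,clocks.mem --format=csv -l 5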