Everything posted by Mizerka

  1. Thanks for the work on this. I'm trying to get nvidia-smi to work in the telegraf docker but just can't get it to run; it passes the nvidia devices over as expected, but fails to run nvidia-smi altogether.
  2. So I got the nvidia device to pass over into the container, but still can't find a way to run nvidia-smi. Figured out that I can access it from /rootfs/usr/bin, and ls lists it, but it fails to run with a "file not found" error.

     /rootfs/usr/bin # ls -la /dev | grep nvidia
     crw-rw-rw- 1 root root 195, 254 May 8 13:05 nvidia-modeset
     crw-rw-rw- 1 root root 243, 0 May 8 13:05 nvidia-uvm
     crw-rw-rw- 1 root root 243, 1 May 8 13:05 nvidia-uvm-tools
     crw-rw-rw- 1 root root 195, 0 May 8 13:05 nvidia0
     crw-rw-rw- 1 root root
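A note on that "file not found": on an Alpine-based telegraf image it usually means the glibc loader that nvidia-smi links against doesn't exist inside the (musl) container, not that the binary itself is missing. A hedged sketch of the usual alternative on Unraid with the Nvidia driver plugin installed — the container name, UUID and image tag below are placeholders, not a verified recipe:

```shell
# Sketch: let the NVIDIA container runtime inject the driver userland
# (nvidia-smi plus its glibc libraries) into the container, instead of
# bind-mounting /dev nodes or reaching into /rootfs/usr/bin.
# On Unraid these typically go into the template's Extra Parameters / variables.
# docker run -d --name=telegraf \
#   --runtime=nvidia \
#   -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx \   # placeholder UUID from `nvidia-smi -L` on the host
#   -e NVIDIA_DRIVER_CAPABILITIES=utility \    # 'utility' is the capability covering nvidia-smi
#   telegraf
```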
  3. Hmmm, not sure what you mean. I already had netdata, but there's nothing in the vars in the template that'd make me think "I'll just copy this and it'll work". Checked the console; it doesn't have access to ipmitool or smi either.
  4. Finally figured it out, god it's so stupid. At boot, after the fix (40%): 30% test, 55% test, 100%. Running at 85% because I don't like fan1 going out of spec; might move them to FANA and FANB for direct control. Anyway, the fix: create a dedicated user for the unraid tool to use, and the same for grafana. I'd been using just the default admin account for everything, until I opened a new tab by accident and saw the message "your session timed out, log in again". Also, nothing in the bios for fan/thermal/power control, so probably stuck with fans 1-6 and fans A-B.
  5. Okay, enough edits. I'm still playing with this: I went back into the ipmi webfront and just switched the fan mode from full speed to standard, and it dropped the fans to their comfortable idles of around 700rpm. Okay... played with min and max more, but no matter what I changed it wouldn't do anything (maybe the issue?), so I grabbed the prime95 docker and threw 60% load on it to see the fans ramping up. Slowly, very slowly; it's been like 5 mins and they're still climbing, despite the logs stating they've been set at 95-97% instantly.

     2020-05-07 23:52:07 Fan:Temp, FAN1234(27%):CPU1 Temp(37°
  6. It allows you to edit the values already set in the BMC. Instead of commands, it's a printout of the config; you edit the values and then the whole config is uploaded back to the BMC. Cool, just noticed the load-on-boot slider as well. Still no luck at controlling the fans though; I must be doing something wrong. Here's what I've set currently, which I'm assuming will set fans 1, 2, 3 and 4 based on the CPU1 sensor (30c ish atm), report/alert below 20c and above 65c, and pwm (or I guess ipmi hex control for supermicro?) will force it at 20% and let it rise until 30.1%.
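For what it's worth, the behaviour described in these posts (mode snapping to full speed, duty changes not sticking) matches how Supermicro BMCs treat fan mode and per-zone duty as separate settings. A hedged sketch of the commonly documented raw commands for X9/X10-era boards — the byte values are an assumption to verify against your board's docs before running anything:

```shell
# Fan MODE (standard/full/optimal) is set independently of per-zone duty,
# and duty writes are commonly reported to stick only while the mode is Full.
# ipmitool lines are commented out on purpose -- verify 0x30 0x45 and
# 0x30 0x70 0x66 apply to your board generation first.
# ipmitool raw 0x30 0x45 0x00          # read current fan mode
# ipmitool raw 0x30 0x45 0x01 0x01     # set mode: 0x00 standard, 0x01 full, 0x02 optimal
duty=20                                # percent, matching the 20% floor above
hex=$(printf '0x%02x' "$duty")         # duty byte the raw command expects
echo "zone 0 duty byte: $hex"          # zone 0 = FAN1-4 headers, zone 1 = FANA/FANB
# ipmitool raw 0x30 0x70 0x66 0x01 0x00 "$hex"
```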
  7. Okay, that makes sense, I'll play with the values a bit. Right now they're bouncing between 1400 and 2000rpm, which for fans rated at 1600rpm isn't ideal; there could be some polling issue here. I'll have a look at the config editor as well. How would that interact with values already set manually: would it overwrite them, or only apply during fan control functions?
  8. Thanks, I'll have a look, but I can't remember seeing an option like that there. Also, playing around with fan control more, it just sets the BMC to full speed instead of controlling it. Hmmm, to be fixed another day.
  9. Maybe I missed it then. It only allows for fan control, but not modifying the thresholds that ipmi actually uses; in supermicro's case, if a fan falls below critical error it ramps it to 100% (like mentioned in the op), so I had to get ipmitool and manually change it through the console rather than the addin. I'm talking about the fan thresholds that ipmi will use to report fan speed errors etc; pretty sure the fan control panel is only for speed modulation based on sensor readings. Also, in my case I only see this; unlike the nice fan-by-fan, I get fan1234, which are 4 pwm headers, which is wei
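Since the console route comes up a few times in this thread, a sketch of ipmitool's threshold syntax — the sensor name and RPM values are examples only, so list your own sensors first:

```shell
# ipmitool's sensor threshold syntax is: lower <lnr> <lcr> <lnc> and
# upper <unc> <ucr> <unr>. Supermicro BMCs ramp fans to 100% when a reading
# crosses lower critical, which is why slow fans need the extra headroom.
# Commented out since they write straight to the BMC:
# ipmitool sensor list | grep -i fan              # find your sensor names
# ipmitool sensor thresh FAN1 lower 100 200 300   # example RPMs for a slow Noctua
# ipmitool sensor thresh FAN1 upper 1500 1600 1700
```

Thresholds set this way live in the BMC, so they survive OS reboots.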
  10. Hey, sorry, not amazing with linux yet. I'm building additional grafana dashboards, using the typical telegraf into influxdb with grafana display. I got everything else sorted but wanted to add ipmi stats and nvidia smi. I found a thread on ipmitool, so I added /bin/sh -c 'apk update && apk add ipmitool && telegraf' to the post arguments, which installs ipmitool within the container's /usr/bin as expected, but I can't get nvidia-smi to work properly. So I'm thinking it might just be easier to give the container access to the system's path directly, but not sure how to accomplish
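With ipmitool installed via those post arguments, telegraf still needs its stock ipmi_sensor input enabled. A minimal sketch, assuming the container reads /etc/telegraf/telegraf.conf (written to a local file here purely for illustration):

```shell
# Append telegraf's built-in ipmi_sensor input. With no "servers" entry it
# runs ipmitool against the local interface, so the container also needs
# the host's ipmi device passed through (e.g. --device=/dev/ipmi0).
cat >> telegraf.conf <<'EOF'
[[inputs.ipmi_sensor]]
  path    = "/usr/bin/ipmitool"
  timeout = "20s"
EOF
grep -A3 'inputs.ipmi_sensor' telegraf.conf
```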
  11. Hmmm, so I ended up resetting the bmc and changing the ip addressing, and it looks like it picked it up afterwards. ipmisensors reported a connection timeout, so it probably was a networking/arp issue. Anyway, up and running again. Btw, being able to modify fan thresholds from the tool would be nice; my x9 board really didn't like my Noctuas getting down to 300rpm, and assumed my case fans are fine up to 18k rpm. Just supermicro things. Also, am I right in thinking fan control only controls fans 1-4 and fan A? Or does it just specify naming, and actually does all numbered and then all lettered?
  12. Thanks, I'll give that a try. I've now removed any other addons or packages that might interfere with ipmi, including the nerdpack pkg that I've used ipmitool with. Did try to recreate the connection a few times; previously, on a bad pass it'd just throw a conn refused in the logs, but this time I got nothing, and the fact that ipmitool worked on its own (without configuring it) would've made me believe it created the connection fine. Failing that, I'll give it a good ol' turn it off and back on again.
  13. Hey, thanks for your work on this. It looks like I managed to break something after a reboot and it no longer sees network ipmi; it doesn't report any issues, and using ipmitool from nerdpack in the console reports the sensors correctly. I can only see the hdds and hdd temps reported from unraid. Gave it another reboot, but that still didn't do anything.
  14. Haha, yeah, it was that, sorry for assuming. And yes, I remember Easter Mondays of keeping a close eye on my father, as he might just walk into the living room with a bucket of cold water and toss it over you. Stay safe, and thanks again.
  15. Wow, I feel a bit dumb. Ye, that worked: disable tls/ssl from settings, and after reloading it accepts both now as expected. I think I must've enabled this when setting up the encrypted xfs drives. Dziekuje (thank you)!
  16. Hey, so it's been an issue for a long time, but I've dealt with it using jumpboxes etc. Lately, though, I've been accessing it over vpn a lot more, and don't fancy leaving a pc on all day just to act as a jump box for unraid access. So, right now, every time I try to access my unraid web portal from any device on the network, I am forced and redirected to the unraid.local fqdn (http; https is disabled atm afaik). Not a major issue; pihole deals with .local as expected, queries the local tables for it, and resolves. The issue then becomes that over the vpn tun (ovpn) can access any of
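To split a problem like this between name resolution and the redirect itself, a couple of hedged checks run from a VPN client may help — the IPs below are placeholders for the pihole and the unraid box:

```shell
# If the name resolves over the tunnel but the page doesn't load, the
# redirect isn't at fault; if resolution fails only over the tun, suspect
# the DNS the VPN pushes to clients. Commented out since the IPs are fake.
# nslookup unraid.local 192.168.1.2                  # placeholder pihole IP
# curl -sI http://192.168.1.10/ | grep -i '^location'  # does the webgui bounce you to the fqdn?
```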
  17. Hmm, scrap that. I played around with it more and ruled out local and networking, all of which looked as expected. The issue is isolated to the vpn tunnel; I say that because I've also tried another brand new container, same results, a brand new qbit container, same results. is what the traffic looks like, with the spikes being when I briefly turned the vpn off for testing, where you can clearly see a spike to the expected 13-15mib/s. So, playing around with the ovpn files, it looks like it's not liking tcp; after changing the nordvpn connection profile to udp, it instantly kicked bac
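The tcp-to-udp switch described above can also be made directly in the .ovpn profile. The sketch below builds a placeholder profile just to show the edit — real profiles come from NordVPN, where the TCP configs use port 443 and the UDP ones port 1194:

```shell
# TCP-over-TCP tunnels amplify retransmits under loss, which fits the slow,
# laggy ramp described above; UDP transport avoids the nested congestion control.
printf 'client\nproto tcp\nremote uk1234.nordvpn.com 443\n' > profile.ovpn  # placeholder profile
sed -i 's/^proto tcp$/proto udp/' profile.ovpn
sed -i 's/^\(remote .*\) 443$/\1 1194/' profile.ovpn   # NordVPN's UDP port
cat profile.ovpn
```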
  18. Hey, thanks for the work on the container. Lately I seem to be really struggling with down speeds; I can't seem to get anything more than 3MB/s down and 1MB/s up. It must've been happening for around a week or two (I auto-update containers, so can't tell exactly). Previously I'd easily saturate the wan link (130mbps down and 40mbps up). There weren't any changes or anything that'd affect this? I did upgrade to 3.8 around the same time as well, if that changes anything. Using nordvpn UK p2p tcp vpn (same server and ovpn file, but tried others as well). Thanks
  19. I love unraid for the true jbod experience; having moved from freenas, the ui and features are well worth the price tag. I'd love to see proper ssd, m2 and nvme integration, including full flash arrays, or at least improved compatibility for them. Implementing a number of community addons and plugins would be nice as well.
  20. Thanks for flagging this, wasn't aware of it.
  21. Oh, you're right, I missed that;

     nobody 13716 8.7 92.7 92190640 91863428 ? Sl 06:34 66:18 | | \_ /usr/bin/python -u /app/bazarr/bazarr/main.py --no-update --config /config

     Okay, killing it for now then. I guess it's some memory leak; never seen it use that much. Thanks
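One way to stop a leaking container from taking the host with it, assuming the usual Unraid template route — the container name and the 2g figure are arbitrary examples, not a recommendation for this specific leak:

```shell
# Cap the container's cgroup memory so a leak gets OOM-killed inside the
# container instead of exhausting host RAM. Commented out as a sketch:
# docker update --memory=2g --memory-swap=2g bazarr
# (or add --memory=2g to the container's Extra Parameters in the Unraid UI,
# so the cap survives container re-creation)
```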
  22. Guess who's back, back again. Array's dead, dead again. I've isolated one of the cores after a forced reboot, so now at least the webgui is usable (I guess isolation from everything but the unraid os? okay), despite every other core sitting at 100%. Dockers are mostly dead due to lack of cpu time, but sometimes respond back with a webpage or output; shares are working almost normally as well. Nothing useful in the logs again. After removing plugins one by one, the array returned to normal after killing the ipmi or temperature sensor plugins. So that's interesting, that it'd bric
  23. Sure, well, I give up then, good luck. The only other thing in terms of config is that you have disk shares force enabled; you're better off using user shares, or leaving it on the auto default and mounting disks outside of the array if that's what you need.
  24. Those dns servers are a bit weird. The first is likely your router, but the other 2 are public and weird; I'd change to the local router only, probably. This is given out by your dhcp, i.e. the router; again, strange. One of them points to some random location in Romania. And yeah, ipv6 is enabled, so it picked up fe80::. You should disable dhcp for something like unraid, it'll just cause you issues one day. Comparing my config to yours, there's nothing wrong unraid side, and it doesn't report issues either. Make sure you have filesharing and discovery completely
  25. That looks retro anyways. Tried applying those changes? Since you can ping it and it echoes back, that's good enough; from here on, it'll be a layer 4 issue onwards. I'd still wager it's a microsoft service issue. Got any other machines on the network that can access this share, btw? Also, looking at the logs briefly, you have the nameserver set to, typo or is that some strange public resolver you use? It doesn't actually respond on 53 by the looks.