Everything posted by nimaim

  1. That would be ideal. Here is @olehj's script from the nsfminerOC container: https://github.com/olehj/docker-nsfminerOC/blob/master/worker.sh; perhaps someone more experienced than I am can do something similar in this container. The major catch is that it only works with NVIDIA's v460.73.01 driver on the host (again, now unsupported), which is a limitation I've not been able to work around.
  2. Sorry for the late reply, I don't check this forum much. This docker container does not allow setting any OC. You have to use something that invokes nvidia-smi / nvidia-settings and still works with the host. Currently the only thing that works is the nsfminerOC container (no longer supported, btw) with the 460.73.01 driver on the host (also no longer supported; you'd have to install it manually somehow, since the nvidia-driver plugin no longer offers that old version). I was lucky to have had it installed before updating the plugin, so I'm still on it.

     To answer your questions: if you get all this set up, yes, you'd start it once per GPU with the settings for each, and then stop it. Once you have one container set up, you can copy it to a second one to use as a template and just change the GPU ID and settings for that GPU (you can get the GPU ID with the "nvidia-smi" command in the Unraid terminal). No, I do not automate it; it does not take much time to manually start the 2 nsfminerOC containers, stop them, and then start up this trex one, especially when my server is up for weeks/months at a time.

     I posted a screenshot of my 2x 3060 Tis mining in Unraid through this container for about a week, no issues as you can see. My 3060 Ti FE is my main card inside the case, hence the higher temps, with an Aorus 3060 Ti connected externally. My OC settings are -500 core, +2400 memory, 120W (50%) power limit, fans at 60%; a rough sketch of the underlying commands is below. YMMV.
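
     For reference, here is a minimal sketch of the kind of commands that end up being run for those settings. This is my own rough reconstruction, not the actual worker.sh; the GPU/fan indices, the [3] performance-level index, and DISPLAY=:0 are assumptions that may differ per card, driver, and container.

         # Power limit and persistence mode work through plain nvidia-smi (no X needed)
         nvidia-smi -i 0 -pm 1      # persistence mode on GPU 0
         nvidia-smi -i 0 -pl 120    # 120 W power limit

         # Clock/memory offsets and fan speed go through nvidia-settings, which needs
         # an X session with Coolbits enabled (the nsfminerOC container provides one)
         DISPLAY=:0 nvidia-settings \
             -a '[gpu:0]/GPUGraphicsClockOffset[3]=-500' \
             -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=2400' \
             -a '[gpu:0]/GPUFanControlState=1' \
             -a '[fan:0]/GPUTargetFanSpeed=60'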
  3. This is probably a unique request and I know it'd be impossible to guarantee compatibility with newer versions, but I figure it's worth asking anyway. Would it be possible to add a way to manually install a specific driver version? For example, there is currently an nsfminerOC container that only works with NVIDIA driver 460.73.01 on the Unraid host (the only container I know of that properly tweaks GPU OC/fan settings), but that version is no longer available in this plugin as it's quite old. I'd rather not spin up an entire VM just to pass through the card and invoke nvidia-smi, so any other suggestions? I guess manually installing an older version of this plugin would also work, but I'm not sure that is possible.
  4. Understandable, thank you for your work on this! It was the only thing that worked for me to OC + set fan speed in a docker container. I'll migrate over to passing the cards through in a VM.
  5. v460.73.01 is no longer available in the nvidia-driver plugin. Any other versions on the Unraid host confirmed working with this?
  6. This may have been mentioned before, but I'll state it here: first set all your OC parameters with the nsfminerOC container (one per GPU), start it up, then stop it, and then use this trex container to mine with the previously applied settings (sequence sketched below). nsfminerOC invokes nvidia-smi to do the OC and fan control, so the settings stay applied even after you stop the container. The only caveat is that you need the 460.73.01 NVIDIA driver in Unraid or nsfminerOC doesn't work, at least last I checked. Works like a charm for me though: I'm getting ~120 MH/s with 2x 3060 Tis @ 120W each with the OC applied from nsfminerOC.
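
     The whole sequence takes about a minute by hand. A sketch, assuming two nsfminerOC containers and one trex container with these made-up names:

         # apply OC/fan settings, one nsfminerOC container per GPU
         docker start nsfminerOC-GPU0 nsfminerOC-GPU1
         sleep 30                                  # give the scripts time to apply settings
         docker stop nsfminerOC-GPU0 nsfminerOC-GPU1
         # the settings stick on the cards, so now start the actual miner
         docker start trex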
  7. I have linuxserver's letsencrypt (now SWAG) container working just fine, but I'd like to switch over to this as it makes adding entries so much easier through the UI. I also followed Spaceinvaderone's video and set up each container that needs to be proxied on a custom proxynet network (sketched below). Is this still necessary? Any other considerations for migrating over? Is there anything like fail2ban in here?
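
     For anyone unfamiliar with that video, the proxynet piece is just a user-defined Docker bridge network shared by the proxy and the proxied containers; the container names here are examples:

         # one-time: create the custom bridge network
         docker network create proxynet
         # attach the proxy and each proxied container to it (in Unraid you'd instead
         # pick "Custom: proxynet" as the Network Type in each container's template)
         docker network connect proxynet swag
         docker network connect proxynet sonarr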
  8. I must have missed it, but was removing the Log column under Docker intentional? Now I have to click on the individual docker container to bring up the menu in order to view the log. It still works, it's just annoying.
  9. After finally getting the container set up with the fine help of you folks in here, I was able to successfully view the feed off a cheapo Wyze cam (official RTSP firmware). The only problem I'm seeing is that the error window is being spammed with "FFMPEG STDERR [aac @ 0x14c7b3832a40] Queue input is backward in time" messages about every second or two. I used the settings here (except I changed video streaming to HLS; Poseidon had the same issue, though this seems to be an AAC issue): Any ideas? EDIT: No errors with "No Audio", which is what I'll set it to for now. But I am curious why those errors are coming up.
  10. Ok I wasn't sure ... if this is the default, is it safe to disable disk shares then?
  11. Yes I understand the difference between disk and user shares, but should the "cache" disk itself be shared? Prior to 6.9.0, it was not so I'm just wondering if this is a "feature", bug, or something else.
  12. I recently upgraded to 6.9.0-beta25 on my test server and "cache" now appears under disk shares. Is this normal? I am pretty certain I never added any disk shares. Can I safely remove it? Also, how do I permanently disable disk shares? I never intend to share an entire disk on my server. EDIT: Found the option to disable them in "Global Share Settings" ... but my question still remains for "cache".
  13. Re: Workaround for the above ... I could not get this working with Deluge, so I got rid of that container entirely, grabbed LSIO's Transmission container instead (all my other media-related containers are from them), routed all its traffic through binhex-privoxyvpn, and all seems to be good for now. Not sure what the cause was, but this is a cleaner setup as I didn't like AIO containers anyway. Now if binhex-privoxyvpn (i.e. VPN + Privoxy) is not up, the containers using it (nzbget/transmission/etc.) will fail to start; the routing setup is sketched below. Not sure if the bug is on the Deluge side or the Radarr side, because this did work with Radarr v2. I guess some things are expected to be broken in "preview" (beta). Just posting for others facing a similar problem: I'd say try a different application/container instead of wasting hours playing with settings as I did.
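
     The routing itself is just Docker's container network mode; on Unraid it's a flag in the container's Extra Parameters. A sketch, assuming the VPN container is named binhex-privoxyvpn (any ports for the routed apps have to be published on the VPN container, since the apps share its network stack):

         # flag that routes a container through the VPN container's network
         --net=container:binhex-privoxyvpn

         # plain docker CLI equivalent (name is an example, other settings omitted)
         docker run -d --name=transmission \
             --net=container:binhex-privoxyvpn \
             linuxserver/transmission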
  14. Apologies if this has been brought up before, but since upgrading to the :preview tag of the LSIO Radarr container, Radarr is not "seeing" my Deluge torrent downloads, and the completed downloads are stuck in the completed folder. This seemed to work fine in Radarr v2, so I think it's related. I do not have the same problem with nzbget.

     I am using the binhex-delugevpn container with a mapping from /downloads (I changed this from the default /data as I thought that was the issue) -> /mnt/user/downloads; the same mapping exists in Radarr (sketched below). I have the Deluge client set up in "Download Clients" with a category of radarr. Deluge properly starts the download, moves the completed download to /downloads/completed in the container, and finally to the category folder (Label plugin in Deluge), /downloads/completed/radarr. Nzbget works the same way, moving category "radarr" downloads to /downloads/completed/radarr, but while those get picked up by Radarr, the completed torrent directories do not.

     Any ideas? I can't seem to get this sorted out. FWIW, even when I remove the extra move to /radarr (by ignoring the category/label in Deluge) and leave the completed downloads in /downloads/completed, Radarr does not pick them up and I have to manually move, rename, and import them. I thought about switching torrent clients altogether, but since all the VPN-based containers are from binhex and behave similarly, I realized I should try to find the root cause first.
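
     To be clear about the mappings, both containers mount the same host folder at the same container path, so the paths Deluge reports should be valid inside Radarr as well. Roughly (VPN credentials and the rest of the settings omitted):

         # same host folder at the same container path in both containers
         docker run -d --name=binhex-delugevpn -v /mnt/user/downloads:/downloads binhex/arch-delugevpn
         docker run -d --name=radarr           -v /mnt/user/downloads:/downloads linuxserver/radarr:preview

     With that in place, Deluge's /downloads/completed/radarr and Radarr's /downloads/completed/radarr point at the same files, so no remote path mapping should be needed.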
  15. Thanks! Looks like the H110 (9211-4i) + RES2SV240 SAS expander is a good solution to get ~20 disks working off a single PCIe x4 port. Limited bandwidth of course, but more than enough for mechanical drives. Highly recommend this over the Marvell garbage.
  16. Wow, I got screwed on two cards on eBay before finally getting a RES2SV240 that works. The first seller sent the completely wrong card; for the second I went with a cheaper IBM SAS expander that I was going to power off a riser card, and it ended up being a fake. Did a test run with Molex power on this third one and it works well, other than getting insanely hot within 15-20 minutes of starting the parity sync. @Michael_P I'm guessing for the fan, you took the heatsink off, put on a thermal pad, and then the fan?
  17. ^ Thanks, I'll give it a shot! The card / drives seemed stable enough when I had VT-D disabled.
  18. No problem. I guess my case is special, as the card I mentioned uses 2x 9215s bridged with an ASM1806 (which is probably why everything I've tried has had no effect). Most cards I see on here use a single Marvell chip, where that kernel boot parameter seems to work. But my point remains: if you want good compatibility and reliability, just stick to LSI. I learned this the hard way. There is an awesome list of all of them (including OEM models) here: https://forums.servethehome.com/index.php?threads/lsi-raid-controller-and-hba-complete-listing-plus-oem-models.599/. You can get some of them dirt cheap on eBay.
  19. @johnnie.black Side question on the same topic, while I'm waiting for my expander to come in, can I still set up my disks with this Syba card and VT-D disabled? Or not recommended as the SAS card will reassign these drives? It'd be nice to have everything ready to go once it comes in since parity drive initialization takes a full day.
  20. What size Noctua fan do you have mounted on the chip?
  21. See my thread here: I just tried a card (which uses this same chip) and can confirm it does NOT work, even with the iommu=pt parameter (where that goes is sketched below). Definitely go with an LSI card (+ expander if necessary). It is not worth the effort trying to get this broken chip working. It works fine in Windows, but the Linux kernel just does not like it with IOMMU / VT-d enabled.
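
     For anyone who still wants to try that parameter, on Unraid it goes on the append line of the flash drive's boot config (Main -> Flash -> Syslinux Configuration, or edit /boot/syslinux/syslinux.cfg directly); something like:

         label Unraid OS
           menu default
           kernel /bzimage
           append iommu=pt initrd=/bzroot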
  22. @Michael_P Great minds think alike, that was exactly my idea. Pretty sure I will never use the PCI slot. I'm a 3D printing hobbyist, so I'm going to design a PCI-bracketed "tray" that I can screw in and zip-tie this board to. @johnnie.black if only I had that much room in my chassis, lol. It's a miracle any small consumer case (microATX at that) fits 12 drives as it is. The Node 804 is dual-chamber, so maybe I can mount that card on the HDD cage side, but it's pushing it.
  23. Thanks @Michael_P. Never even heard of this card so appreciate your recommendation. Got one on the way. I'll have to find some nifty way of mounting this thing in my Node 804. Definitely a cleaner solution than the PCI-E riser/extender option. What a PITA just to get a bunch of drives working.
  24. Interesting ... that is definitely a cleaner solution. Have you used this? Any idea if it will work with a 9211-4i? Is the PCI-E port on it just used for power, or do you need another spare PCI-E slot on your mobo for this? I assume Molex is the alternate way to power it (or is it in addition?).
  25. Yeah, I've looked all over ... this seems to be accurate. There is one very roundabout way to do it with SAS cards though, which is to pair an LSI 9211-4i variant card (the only x4 SAS card out there) with a SAS expander powered externally via a riser card (a PITA, but it may be my only solution at this point without tearing apart my current setup). Figured it was worth sharing: ..