mcai3db3

Everything posted by mcai3db3

  1. I mean, whatever you need to do to achieve it would be great. I have zero understanding of the technicalities involved in this, so I would appreciate anything you can conjure up!
  2. Unfortunately you can't do anything other than power limit natively, and there's nothing in this Docker that lets you set anything else... currently. @birdwatcher posted above about the NSFMiner Docker having these settings, but I had no joy getting that to work. As for running the script each time you restart Unraid, I just have it run as a script within the 'User Scripts' plugin and tell it to run every time the server starts up. IMO the best route at the moment to maximize MH/efficiency/temps is to create a Windows VM and host your miner there, but there are plenty of pitfalls with VMs & IOMMU. I could only get 3 of my 5 PCIe slots to work with a VM, so I'm running a VM and a Docker, and crossing my fingers that PTRFRLL will manage to get fan/clock settings into this docker some day.
  3. Run 'nvidia-smi -pm 1' first to enable persistence mode. Then run your power limit command as above.
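     A minimal sketch of that startup script, tying the two posts above together. It assumes the 'User Scripts' plugin is set to run the script at array start; the GPU index and the 150W figure are placeholders to adjust for your own cards (-pl takes a value in watts):

         #!/bin/bash
         # Enable persistence mode first, so the power limit sticks
         # while the GPU is otherwise idle.
         nvidia-smi -pm 1
         # Cap power draw: -i selects the GPU index, -pl sets the limit in watts.
         # Repeat (or loop over the output of nvidia-smi -L) for each card.
         nvidia-smi -i 0 -pl 150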
  4. Apologies, I hadn't read the responses when I made my post. I looked at the docker you mentioned, but had several issues attempting to get it to run, due to driver incompatibilities (as you mentioned). Additionally, it doesn't appear to let me use the GPU in my Plex Docker at the same time. I also think it's far from ideal to need to install a docker for each card just to configure these settings. I'm still running this PTRFRLL Docker, as it allows me to mine with my Plex card, but for the rest of my cards it is, in my opinion, far more advantageous to run them in a VM at this time. It would be quite magical if OC settings could be included in this docker, as I could drop this resource-intensive VM and afford to use a much heartier GPU in Plex/tdarr.
  5. All you can do in Unraid is limit the power with an nvidia-smi command: 'nvidia-smi -pl xx', where xx is the power limit in watts. This is why I moved my setup to a Windows VM for all cards other than the one I use for Plex transcoding.
  6. Thank you for the guide @dboris, I'm about to try this myself. I was wondering what happens if I just load the default NH image rather than opening up the file and tweaking the BTC address. Is there no way to just update this in the OS itself? Apologies for the laziness/naivety, I'm just curious.
  7. Update: I "inspected" the HTML source in Firefox, removed the regex validation, and got my dockers started without issue. So something is awry with that form's validation... (edit: or I'm doing something that I'm not supposed to be doing?)
  8. I updated to 6.9.0, and now when I go to start Docker (having stopped it to install the Nvidia plugin, following SpaceInvader's instructions), all that happens is the Docker vDisk Location field goes red, despite the contents ("/mnt/cache/Docker/docker4.img") ending in .img. I haven't changed any of the settings. Any ideas? I'm hoping this is a stupid typo, but right now it's immensely frustrating. Edit: to add, I have tried multiple browsers, same issue. Nothing is logged when I click Apply, as the button is just disabled by this delightful form validation.
  9. I had the same issue. I've since replaced pretty much all my hardware, and I haven't seen the issue again. Idk, but maybe it's a network driver issue or something; all I know is that it annoyed me to the point of spending hundreds of dollars on a new mobo/cpu/ram.
  10. Did you ever work this out? I'm having similar issues and it's obscenely frustrating. I've attached diagnostics, but basically my Docker/Plugins/Main/Dashboard pages either time out or load exceptionally slowly, and several of my dockers are performing terribly too. On 6.8.3. I've tried several things, like reinstalling the Unraid files, changing the docker image, and removing a bunch of plugins/dockers, with no improvement.
      Aug 9 12:35:45 xxx nginx: 2020/08/09 12:35:45 [error] 17148#17148: *20054 upstream timed out (110: Connection timed out) while reading upstream, client: xxx, server: , request: "GET /Docker HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xx.unraid.net", referrer: "https://xx.unraid.net/Docker"
      tower-diagnostics-20200809-1240.zip
  11. Consider me a +1 on this one. I can't use the TVHProxy plugin because of the requirement to set IP addresses for each Docker. This would be great.
  12. So basically, does anyone have a suggestion on how to force the Phaze plugins to wait until the SSD is ready before trying to start up? Is there a way of putting in a forced delay? (I'm not even sure if this'll work, given it could delay UD too.) Thanks!
  13. This is a crosspost, as I'm not sure if it's an Unassigned Devices issue, or a plugin issue (of which all of my plugins are Phaze plugins), so here goes: http://lime-technology.com/forum/index.php?topic=45807.msg465130#msg465130 So basically, does anyone have a suggestion on how to force phaze plugins to wait until the SSD is ready before trying to start up? Is there a way of putting a forced delay in? (I'm not even sure if this'll work, given it could delay UD too). Thanks!
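     Since the same question came up twice, here's one possible shape of that forced delay, as a minimal sketch rather than a tested fix. /mnt/disks/apps is a placeholder for the SSD's Unassigned Devices mount point, and it assumes you can run this ahead of the plugins in the boot sequence (e.g. from the flash drive's go script), which is worth verifying:

         #!/bin/bash
         # Poll (for up to ~2 minutes) until the UD-mounted SSD appears,
         # rather than letting the plugins start against a missing config folder.
         TIMEOUT=120
         while ! mountpoint -q /mnt/disks/apps; do
             sleep 2
             TIMEOUT=$((TIMEOUT - 2))
             [ "$TIMEOUT" -le 0 ] && break   # give up rather than hang the boot
         done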
  14. "By plugins, do you mean Plugins or Docker Applications? If Docker Applications, then you have to stop and start the entire docker service in order for any apps to see any mountings that UD performs (6.1.x). If you're under 6.2, then you can modify your volume mountings to use the new 'slave' option, which should fix this up for you." Just plugins I'm afraid. I've never ventured into the world of Docker.
  15. Howdy all... I'm hoping this is a common issue, or me being stupid somehow, but here goes: my setup is basically that I have one "unassigned device" SSD, and on that disk I install my various plugins (sabnzbd et al.). But I'm having this issue where every single time I restart the server/array, the plugins all run their annoying "wizards" because they fail to read the config folder held on my SSD. I can see in my logs that the SSD (sdd1) is not mounting until after the plugins have tried to install. So basically, how do I force UD to mount this disk before the plugins try to run? Thanks! And thanks for keeping this plugin alive, dlandon! p.s. both unRAID and UD are completely up-to-date.
  16. Yes HDD, but with an extra SSD for my apps.
  17. 10 drives:
      1 * 4TB parity
      2 * 4TB data
      4 * 2TB data
      1 * 1TB data
      1 * 500GB cache
      1 * 120GB SSD (outside of array as application drive)
  18. I've already run it on disk1 and the cache disk. I had errors on the cache disk, which have since been fixed, but the problem persists. I can try doing it on the other disks, but I'm pretty sure the issue exists between these two drives. Another attempt I made to fix this was to define minimum free space on each share, and to delete all drive names from the "included/not included" parts of the settings. This still didn't work. Every time the mover runs, this error happens and my system locks up, requiring a hard restart. This is not ideal. I want to use the cache disk for installing plugins, and would be happy to disable its "cache" usage entirely. Unfortunately my attempts to do this are failing: I've told each Share not to use the cache... and yet it continues to do so. EDIT: Looks like it actually deleted the files it was trying to move. Great!
  19. I believe I'm in the same boat on this one. Did you ever get a fix? (http://pastebin.com/4diqSEDA)
  20. The prices will find their way back down again; it's just gonna take a while. Right now they're loving the profits, but soon enough one manufacturer will undercut another and prices will have to drop to stay competitive. Though it wouldn't shock me if we're looking at a year or more before that happens.
  21. Mine have arrived! Ha, it took some time, but they're finally here. No need to install them just yet, though... has anyone else taken the plunge?
  22. "You have any issues w/ movie skipping? I noticed this when I just wake up the server and start playing the movie from the Boxee. I then will have to stop the movie and start over; after that it's smooth as butter." The loading circle thing appears when I'm waking up the server HDDs, which is expected, but I haven't had any skipping issues. I'm connected to the server using SMB links to unRAID shares, on unRAID 4.7 and the most up-to-date version of Boxee (well, technically not, since they updated yesterday and I've yet to turn it on since).
  23. I have a Boxee Box playing files from my unRAID server and it's been nothing but a treat. I barely use its online capabilities to stream things, so perhaps that's an area that isn't getting such rave reviews? As a local media streaming device with scraping capabilities, it's been perfect.
  24. I've not got that syslog anymore, but I can tell you exactly what I can do to cause such an error...! Earlier today, when I got that message, it was rebuilding a drive I'd removed onto a new drive, so the array was 'Starting' at that point. Then I went into the 'Shares' tab of the unRAID management page and made a change to one of the shares (adding a disk to a share, or something like that). When that page refreshes, all but one of my Shares disappear from the Shares setup page. Once the array is started up, only the one remaining Share is findable on the network. So, at that point I decide I want to reboot the thing so I can have my Shares back!... OK, so I disconnect everything from the server, and then I try to stop the array. Not so fast. UnRAID says no. It basically just gets stuck on 'Unmounting' for multiple drives, until I end up powering down with UnMenu. When I restart, all is fine. I get that IRQ message during that whole affair. Right now I'm in no mood to be fiddling around with the box; it's been working for 4/5 days and I just don't want to mess with it unless it proves itself to be an issue. Oh, and the only reason I was suspicious of my cache drive was the frequent inability to unmount it, but that could've been caused by anything. It was also a convenient way to remove my 500GB cache and replace it with a 2TB data drive that I just bought (and downgrade a 1TB data drive to cache duty).