
About Surgikill


  1. This only fixes the display in unraid; it does not change the value that gets sent to InfluxDB. I would not expect the UPS to report a figure higher than the Kill-A-Watt. Currently, I have on-board power monitoring for all of my servers; the only non-power-monitored items in my rack are a non-PoE switch and a modem. If I take the power reported by each server and subtract it from the power reported by the UPS, I am left with around 100-150 watts of overhead, which is much more than my switch and modem pull. Power factor correction is also enabled on all of the
  2. Hi all, I have a Tripp Lite SMART1500LCD UPS, and I'm having an issue getting the current wattage to show up correctly. I have a Kill-A-Watt connected between the UPS and the wall, and it seems that no matter what I do, I cannot get an accurate reading. These are my current settings for the UPS. The only driver that will work is usbhid, although I believe the tripplite-usb driver should work too. Currently, NUT is showing around 520 watts of power being consumed, while the Kill-A-Watt is only showing 350. This gets exacerbated when I try t
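One possibility worth checking: NUT drivers often derive a wattage from `ups.load` multiplied against a nominal rating, and the SMART1500LCD is rated 1500 VA / 900 W. If the estimate is taken against the VA (apparent-power) figure instead of the real-power rating, it overshoots what a Kill-A-Watt reads. A minimal sketch of the arithmetic, with illustrative values (check `upsc` output for `ups.load` and `ups.realpower.nominal` on your unit):

```python
# Sketch: how a NUT-style wattage figure can be derived when the driver
# only exposes a load percentage and a nominal rating. Values illustrative.

def estimated_watts(load_percent: float, nominal_watts: float) -> float:
    """Estimate real power as load% of the nominal real-power rating."""
    return load_percent / 100.0 * nominal_watts

def estimated_va(load_percent: float, nominal_va: float) -> float:
    """Same estimate taken against the VA (apparent power) rating instead."""
    return load_percent / 100.0 * nominal_va

# With ups.load = 35, multiplying against 1500 VA lands near 525 "watts",
# while the same load against the 900 W rating gives 315 W -- roughly the
# 520-vs-350 mismatch described above.
print(estimated_va(35, 1500))    # 525.0
print(estimated_watts(35, 900))  # 315.0
```

This doesn't prove that's what usbhid is doing here, but comparing the reported figure against both ratings is a quick way to spot a VA/W mix-up.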
  3. Any help with this? I really don't want to restart my server seeing as I am away from it for quite a while.
  4. Hey all, I'm having some issues with my log filling up. I found another thread and ran some commands. It seems like I have a directory /var/log/docker.log.1 which is full of files. I also downloaded a dump of some system files. It seems like my other machine keeps trying to mount a share on my unraid machine every 10 seconds or so. I also checked docker container size, and nothing stuck out as being too out of the ordinary. No logs were above 80MB and the largest container was Tdarr at 5.5GB. Any idea what could be filling up my log? Uptime is 57 days. syslog.2.txt
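For anyone hitting the same thing, a few stock commands that usually narrow down what is filling `/var/log` (paths follow the post above; `docker.log.1` and the ~10-second mount attempts are the things to confirm):

```shell
# Largest entries under /var/log, biggest last
du -ah /var/log 2>/dev/null | sort -h | tail -n 20

# If /var/log/docker.log.1 really is a directory of rotated files,
# the newest entries show which service keeps writing to it
ls -lt /var/log/docker.log.1 2>/dev/null | head

# Count the repeated mount attempts recorded in the syslog
grep -c 'mount' /var/log/syslog 2>/dev/null || true
```

If the mount-attempt count dwarfs everything else, silencing or fixing the other machine's retry loop is likely the real fix rather than rotating logs harder.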
  5. I'm having an issue with this. I am using linuxserver plex and when I map it to /tmp it ends up occupying space in the plex container. Any ideas on this?
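If the goal is RAM transcoding, one way to keep it out of the container's writable layer is a tmpfs mount on the transcode path rather than mapping to the container's own /tmp. A hedged config fragment in docker-run terms (the /transcode path and 4g size are illustrative, and on unraid this would go in the container's extra parameters):

```shell
# Illustrative only: mount a RAM-backed tmpfs at the transcode path so
# transcode writes never land inside the container image/layer.
--mount type=tmpfs,destination=/transcode,tmpfs-size=4g
```

Plex's transcoder temp directory would then need to point at that same path in its settings.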
  6. I'm having a slight issue. I was originally running Elasticsearch 6.x. I got through to the webgui on Diskover and it told me to perform a crawl. I read through the documentation, which said I need Elasticsearch 5.6.x, so I deleted the previous appdata and installed 5.6.16. Now it runs, and I can access the webui of both Diskover and Elasticsearch. No errors in the Elasticsearch logs, no errors in the Diskover logs; just warnings in Redis about THP and overcommit_memory. Diskover still tells me to run a crawl from the webgui. Not sure where to go from here; everything looks kosher.
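On the Redis warnings specifically (likely separate from the crawl issue), these are the host-level settings Redis itself recommends in its startup warnings. They need root on the host, and are a sketch of the standard fix rather than unraid-specific advice:

```shell
# Allow Redis background saves to fork reliably under memory pressure
sysctl vm.overcommit_memory=1

# Disable transparent huge pages, which Redis warns cause latency spikes
echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

Both reset on reboot unless persisted (e.g. via sysctl.conf or the unraid go file), so re-check after the next restart.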
  7. Hey all, I'm looking into buying a 720xd (I need more bays) and I'm in a quandary. I'd like to put a GPU in it to use NVENC transcoding for plex, as well as have it run all my other applications (plex, nginx, nextcloud, bitwarden, etc.). The issue I am having right now is that whenever the array is backing up from cache or moving excessive amounts of data, everything slows to a crawl and IOWAIT spikes. I was thinking I could run a VM on top of unraid, but have the disk for the VM be outside of the array, setting it up as an SSD or NVMe drive. Then I could mount that disk and use it
  8. So I just randomly started having this issue. Neither Radarr nor Sonarr will copy movie files over to the directory they are supposed to be in after they have finished downloading. They were both working fine the other day, but as of last night they are both now borked. I have restarted the containers and the server. The next option is to reinstall both and start from scratch but I would rather not do that. I have checked settings and create hardlinks is set to no. Any ideas? What could I post to help diagnose this?
  9. So I'm using a 256GB SSD on my unraid setup, and I currently have turbo write enabled. Turbo write will only have all drives spun up during writes, correct? My thinking is that the cache drive will handle all the reads/writes, and then the scheduler will dump the cache to the array every 3 hours, at which time all drives will spin up and the cache will empty as fast as possible. Is this the way it should work? When reading, only the drive that actually holds the data should be spun up, right? I don't want all my drives spinning all the time, but I don't want to deal with t
  10. So I should replace it with container path as seen here? EDIT: I changed it and it worked, so it at least sees the files now, but for whatever reason I can't find any subtitles for anything. Is there a way I can fix that too?
  11. Hey guys. I'm trying to get Bazarr all set up but I'm pretty sure I FUBAR'd something. In the Bazarr webui it says that all of my paths are invalid for every file I have. It also will not find any subtitles for any of the content I have. I have made sure to set the Sonarr/Radarr paths and input my opensubtitles account info. You can see what is set up in my below screenshots.
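The "invalid path" errors in Bazarr usually come down to Sonarr/Radarr reporting paths from *their* containers that don't exist inside Bazarr's container. Bazarr's path-mapping setting is effectively a prefix rewrite; a sketch with hypothetical paths (`/tv` and `/media/tv` are examples, not the actual mounts from the screenshots):

```python
# Sketch of what a Sonarr->Bazarr path mapping does, assuming Sonarr
# reports files under /tv while Bazarr's container sees the same share
# at /media/tv. Both roots are hypothetical examples.

def remap(path: str,
          sonarr_root: str = "/tv",
          bazarr_root: str = "/media/tv") -> str:
    """Rewrite a path reported by Sonarr into Bazarr's view of the mount."""
    if path.startswith(sonarr_root):
        return bazarr_root + path[len(sonarr_root):]
    return path  # already valid from Bazarr's perspective

print(remap("/tv/Show/Season 01/episode.mkv"))
# -> /media/tv/Show/Season 01/episode.mkv
```

If both containers mount the share at identical container paths, no mapping is needed at all, which is often the simpler fix.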