
cowboytronic


  1. Many Docker images seem to suffer from memory leaks. I recently had trouble with Transmission and, after some digging, found a way to apply a memory limit. The docker command line accepts a --memory=xG flag, but it's not obvious where to add this in unRAID. There were two things to figure out: (1) what flag to add, and (2) where to add it.

     I kept trying to use "Add another Path, Port, Variable, Label or Device" and couldn't get it to work. Eventually I realized I could toggle to Advanced View and put the flag into "Extra Parameters:", which I hadn't previously realized was the place to add command-line flags.

     It would be nice if there was a standard field or control in the Docker container edit window for a memory limit, which could just map to adding that flag. I think a lot of people would want to use this.
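The mapping can be sketched like this; the container name and the 4G value are examples, not taken from a real template:

```shell
# unRAID appends whatever is typed into the Advanced View "Extra Parameters:"
# field to the docker run command it builds, so entering the flag there is
# equivalent to something like:
#
#   docker run -d --name=Transmission --memory=4G [...rest of the template...]
#
# Docker accepts b/k/m/g unit suffixes on the value:
mem_flag="--memory=4G"
echo "$mem_flag"
```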
  2. I saw the same thing: a Transmission memory leak over time. My server would become unresponsive every so often and I'd have to reboot. It turned out to be Transmission chewing up tons of memory, as could be seen in Grafana. I fixed this with a recommendation found on these forums.

     What I couldn't figure out at first was *where* to put the '--memory=xG' limit I wanted in place. While editing the Transmission Docker, I had to flip the "Basic View / Advanced View" toggle switch in the upper right. That revealed an empty "Extra Parameters:" field where I could put in:

        --memory=4G

     I'll let this run a while and see if the system stays stable, and whether Transmission can still function when it hits the limit I put on it.
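To confirm the limit actually took effect after restarting the container, something like this works (a sketch; the container name "transmission" is an assumption, use whatever yours is called):

```shell
# Hypothetical helper: report the configured limit and current usage.
show_mem_limit() {
  docker inspect -f '{{.HostConfig.Memory}}' "$1"  # configured limit, in bytes
  docker stats --no-stream "$1"                    # one-shot usage vs. limit
}
# usage: show_mem_limit transmission

# --memory=4G as docker inspect reports it (docker's g suffix is binary, 1024^3):
bytes=$((4 * 1024 * 1024 * 1024))
echo "$bytes"  # 4294967296
```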
  3. Thanks! This works, and it was really easy to replicate your script in my setup. I agree it would be nice to log at intervals shorter than one minute, but for long-term power consumption trends this is good enough for me. Certainly better than the zero logging I had before.
  4. No, not yet. It's easy to see the UPS power info I want at the command line with `apcaccess`, but I don't know how to get the info into telegraf -> InfluxDB -> Grafana.

     Yeah, I have auto-update enabled for Docker containers, and every so often I lose all of my (non-HDD) temp sensors. When I notice a blank spot in my temperature chart I have to do this:

        docker exec -ti telegraf /bin/sh
        apk update
        apk add lm_sensors

     Which is annoying. Does anybody know how to either make lm_sensors stay added permanently, or auto-run these commands on any update?
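One stopgap, pending a better answer, is to re-run those commands automatically on a schedule (e.g. from cron or the User Scripts plugin), only when the package has gone missing. A sketch, assuming the container is named telegraf as above:

```shell
# Re-install lm_sensors inside the telegraf container only when it is missing.
# Safe to run repeatedly; the apk step is skipped if `sensors` already exists.
ensure_sensors() {
  docker exec telegraf sh -c 'command -v sensors >/dev/null' ||
    docker exec telegraf sh -c 'apk update && apk add lm_sensors'
}
# usage (e.g. hourly from cron): ensure_sensors
```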
  5. I also got started using this based on the Reddit thread https://www.reddit.com/r/unRAID/comments/7c2l2w/howto_monitor_unraid_with_grafana_influxdb_and/ This is really great, and the FAQ on that thread is super helpful. I've got some nice graphs going, including disk and system temperatures, CPU and network activity, etc. Try to guess when I did my parity check.

     Now I want to log the load on my UPS to get an idea of the power consumption of my server rack over time. There's probably an efficient way to do this, but I'm just not sure what to do. As an example, here's what 'apcaccess' puts out:

        root@Tower:~# apcaccess
        APC      : 001,032,0751
        DATE     : 2018-04-08 21:19:38 -0700
        HOSTNAME : Tower
        VERSION  : 3.14.14 (31 May 2016) slackware
        UPSNAME  : Tower
        CABLE    : USB Cable
        DRIVER   : USB UPS Driver
        UPSMODE  : Stand Alone
        STARTTIME: 2018-04-08 21:10:06 -0700
        MODEL    : CP1500PFCLCD
        STATUS   : ONLINE
        LINEV    : 120.0 Volts
        LOADPCT  : 8.0 Percent
        BCHARGE  : 100.0 Percent
        TIMELEFT : 94.0 Minutes
        MBATTCHG : 10 Percent
        MINTIMEL : 10 Minutes
        MAXTIME  : 0 Seconds
        OUTPUTV  : 120.0 Volts
        DWAKE    : -1 Seconds
        LOTRANS  : 88.0 Volts
        HITRANS  : 139.0 Volts
        ALARMDEL : 30 Seconds
        NUMXFERS : 0
        TONBATT  : 0 Seconds
        CUMONBATT: 0 Seconds
        XOFFBATT : N/A
        SELFTEST : NO
        STATFLAG : 0x05000008
        SERIALNO : 000000000000
        NOMINV   : 120 Volts
        NOMPOWER : 900 Watts
        END APC  : 2018-04-08 21:19:39 -0700

     Here is what's shown in the UPS Settings plugin: [screenshot attachment]. Interestingly, this displays the load in Watts, but none of the data I can get out of any of the UPS tools displays the current load directly. I can only assume that I'd need to calculate the actual load in Watts from:

        LOAD = (LOADPCT / 100) * NOMPOWER * (LINEV / NOMINV)

     So what's the best way to get this data into telegraf? The two things I can think of are: 1. have something inside the telegraf docker call the 'apcaccess' command and parse the result, or 2. have something inside the telegraf docker parse the data from some existing log file (if it exists).

     To do #1 (run a command to get data), I think I'd need to grant more access to telegraf, which I'm not sure I should do. It already has read access to '/' mounted at '/rootfs', but it can't execute commands on the root file system:

        root@Tower:~# docker exec -ti telegraf /bin/sh
        / # /rootfs/sbin/apcaccess
        /bin/sh: /rootfs/sbin/apcaccess: not found
        / # ls /rootfs/sbin/apc*
        /rootfs/sbin/apcaccess  /rootfs/sbin/apctest  /rootfs/sbin/apcupsd

     So it can see it, but not execute it.

     To do #2 (log from a file), I think I'd need to turn on logging for apcupsd. I looked into this, and it can be done:

     1. open /etc/apcupsd/apcupsd.conf in an editor
     2. change STATTIME to '1' (or any number other than zero; this is the logging interval in seconds)
     3. change LOGSTATS to 'on'
     4. make the output "status" log file exist with `touch /var/log/apcupsd.status`
     5. restart apcupsd with `/etc/rc.d/rc.apcupsd restart`

     The nice thing is that /var/log/apcupsd.status gets recycled on every log write, so it only ever holds one dataset rather than turning into an ever-expanding logfile. The only problem is that after I do this, my syslog gets spammed with UPS data, and I'm not sure how to prevent that. I was going to look into using [[inputs.logparser]] in telegraf to grab data from that log file, but I turned the logging back off due to the syslog issue. So I'm not sure what the right way to do this is. Any ideas?
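For option #1, the parse-and-compute step can be sketched in shell with awk. The field values here are inlined for illustration; on the server you would pipe the live output into the same awk program (`apcaccess | awk ...`):

```shell
# Compute load in watts from apcaccess fields:
#   watts = (LOADPCT / 100) * NOMPOWER * (LINEV / NOMINV)
watts=$(printf '%s\n' \
    'LINEV    : 120.0 Volts' \
    'LOADPCT  : 8.0 Percent' \
    'NOMINV   : 120 Volts' \
    'NOMPOWER : 900 Watts' |
  awk -F': *' '
    /^LINEV/    { linev    = $2 + 0 }
    /^LOADPCT/  { loadpct  = $2 + 0 }
    /^NOMINV/   { nominv   = $2 + 0 }
    /^NOMPOWER/ { nompower = $2 + 0 }
    END { printf "%.1f", (loadpct / 100) * nompower * (linev / nominv) }')
echo "$watts"  # 8% of 900 W at nominal line voltage -> 72.0
```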
  6. It's possible to see an instantaneous snapshot of your UPS data at the command line:

        root@Tower:~# apcaccess
        APC      : 001,032,0751
        DATE     : 2018-04-08 21:19:38 -0700
        HOSTNAME : Tower
        VERSION  : 3.14.14 (31 May 2016) slackware
        UPSNAME  : Tower
        CABLE    : USB Cable
        DRIVER   : USB UPS Driver
        UPSMODE  : Stand Alone
        STARTTIME: 2018-04-08 21:10:06 -0700
        MODEL    : CP1500PFCLCD
        STATUS   : ONLINE
        LINEV    : 120.0 Volts
        LOADPCT  : 8.0 Percent
        BCHARGE  : 100.0 Percent
        TIMELEFT : 94.0 Minutes
        MBATTCHG : 10 Percent
        MINTIMEL : 10 Minutes
        MAXTIME  : 0 Seconds
        OUTPUTV  : 120.0 Volts
        DWAKE    : -1 Seconds
        LOTRANS  : 88.0 Volts
        HITRANS  : 139.0 Volts
        ALARMDEL : 30 Seconds
        NUMXFERS : 0
        TONBATT  : 0 Seconds
        CUMONBATT: 0 Seconds
        XOFFBATT : N/A
        SELFTEST : NO
        STATFLAG : 0x05000008
        SERIALNO : 000000000000
        NOMINV   : 120 Volts
        NOMPOWER : 900 Watts
        END APC  : 2018-04-08 21:19:39 -0700

     I'm creating historical graphs of various things on my system (disk and system temperatures, CPU and network activity, etc.) using Grafana/InfluxDB/Telegraf based on the instructions here: https://www.reddit.com/r/unRAID/comments/7c2l2w/howto_monitor_unraid_with_grafana_influxdb_and/ What I can't figure out is how to get Telegraf to grab the info from 'apcaccess' and stuff it into the DB for Grafana to display.
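Telegraf's exec input plugin can run a command on each collection interval, which may be one way to do this. A sketch of the config; the helper script path is hypothetical (it would wrap apcaccess and print InfluxDB line protocol):

```toml
# telegraf.conf fragment -- /usr/local/bin/apcaccess_influx.sh is a
# hypothetical wrapper that runs apcaccess and emits a line such as:
#   apcupsd loadpct=8.0,linev=120.0,bcharge=100.0,timeleft=94.0
[[inputs.exec]]
  commands = ["/usr/local/bin/apcaccess_influx.sh"]
  timeout = "5s"
  data_format = "influx"
```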
  7. Updated from 6.3.5 to 6.4.0 and it was pretty painless. I clicked the update prompt at the bottom, then manually rebooted. It seemed to boot back up in 6.3.5 once, then the server disappeared shortly after, then when it came back it was 6.4.0. I wasn't in the same room as the server to hear when it actually rebooted. So one confusing bit, but otherwise it worked well. Thanks!
  8. Thanks! I'll give this a try.

     Edit: so far so good; I imported about 50 artists to test. However, in a scan of the larger collection, there are quite a few artists which won't match. Which music DB is being used to match against?

     Edit2: my first download attempt successfully snatched but wouldn't import the album. It turns out the template doesn't include /downloads, so I added a mapping for that and it successfully imported the downloaded album.
  9. I, too, am trying to recover from having my plex folder backed up. I've stopped running backups because they made my system unresponsive. Right now I'm just trying to delete the old backups first, so I can get back to a state where I can run backups again (minus plex, of course). I'm probably also going to have to go the route of moving everything but backups off of a disk, pulling the disk, and shrinking the array. The backup directory just can't be killed with 'rm -rf' or 'find -delete' without freezing my system, leading to an unclean shutdown.
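One thing that may be worth trying before pulling the disk is running the delete at idle I/O and CPU priority, so it yields to everything else on the array. A sketch; the usage path is a placeholder for wherever the old backups live:

```shell
# Delete a huge directory tree at idle priority (ionice class 3 = idle,
# nice 19 = lowest CPU priority) so the rest of the system stays responsive.
delete_backups() {
  ionice -c 3 nice -n 19 find "$1" -mindepth 1 -delete
}
# usage: delete_backups /mnt/user/Backups/plex_old
```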
  10. Thanks for all the great feedback. You got me to do a bunch of research on SATA cards, and I realized my Syba PCIe cards have the Marvell chipsets that are notorious for creating silent corruption. I just bought a Dell H310 card to flash as an LSI 9211-8i, plus two SFF-to-SATA breakout cables, all for ~$50 on eBay. What a bargain! My read/write speed appeared to choke around 26 MB/s. I think that having two of those Syba cards with 4x drives on each may be the bottleneck causing my server to randomly freeze from time to time, or causing Plex to buffer sometimes. I'll see after I get the new card and move the drives onto it.
  11. I've had this issue several times recently, and I've yet to figure out what is causing it. I'll be watching a video on Plex and, somewhere in the middle or after the halfway point, it will start buffering. The Plex Media Player log has lines like this:

        2017-07-27 22:06:03 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:04 [ DEBUG ] PlayerComponent.cpp @ 526 - cache: Cache is not responding - slow/stuck network connection?
        2017-07-27 22:06:04 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:05 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:06 [ DEBUG ] PlayerComponent.cpp @ 526 - cache: Cache is not responding - slow/stuck network connection?
        2017-07-27 22:06:06 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:07 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:08 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:09 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:09 [ INFO ] JS: %c[Timeline] playing, 1669751/1978977
        2017-07-27 22:06:10 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:11 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:12 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:13 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:13 [ DEBUG ] PlayerComponent.cpp @ 526 - cplayer: Enter buffering.
        2017-07-27 22:06:13 [ INFO ] PlayerComponent.cpp @ 399 - Entering state: buffering
        2017-07-27 22:06:13 [ INFO ] JS: %c[Player] 0% buffered
        2017-07-27 22:06:13 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:13 [ INFO ] JS: %c[Timeline] buffering, 1673588/1978977
        2017-07-27 22:06:13 [ INFO ] JS: %c[PlayerController] State change buffering
        2017-07-27 22:06:13 [ DEBUG ] PowerComponent.cpp @ 53 - Enabling OS screensaver
        2017-07-27 22:06:13 [ INFO ] JS: %c[Player] 4% buffered
        2017-07-27 22:06:14 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:15 [ INFO ] JS: %c[Player] 8% buffered
        2017-07-27 22:06:15 [ INFO ] JS: %c[Player] 19% buffered
        2017-07-27 22:06:16 [ DEBUG ] PlayerComponent.cpp @ 526 - cache: Cache is not responding - slow/stuck network connection?
        2017-07-27 22:06:19 [ INFO ] JS: %c[Timeline] buffering, 1673588/1978977
        2017-07-27 22:06:23 [ DEBUG ] PlayerComponent.cpp @ 526 - cplayer: End buffering (waited 10.122276 secs).
        2017-07-27 22:06:23 [ INFO ] PlayerComponent.cpp @ 395 - Entering state: playing
        2017-07-27 22:06:23 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:23 [ INFO ] JS: %c[Timeline] playing, 1673588/1978977
        2017-07-27 22:06:23 [ INFO ] JS: %c[PlayerController] State change playing
        2017-07-27 22:06:23 [ DEBUG ] PowerComponent.cpp @ 58 - Disabling OS screensaver
        2017-07-27 22:06:23 [ INFO ] JS: %c[Commands] Executing persistPlayQueue
        2017-07-27 22:06:24 [ DEBUG ] PlayerComponent.cpp @ 526 - cache: Cache is not responding - slow/stuck network connection?

     While in the Plex Media Server log I'll see messages like this around the same time:

        Jul 27, 2017 22:06:02.733 [0x2b40d6600700] VERBOSE - We didn't receive any data from 192.168.1.62:39513 in time, dropping connection.
        Jul 27, 2017 22:06:06.010 [0x2b40d6600700] VERBOSE - We didn't receive any data from 192.168.1.62:39515 in time, dropping connection.
     My configuration is:

        Pentium G4620 (Kaby Lake) @ 3.70GHz
        16GB DDR4 2400 MHz
        Plex Media Server docker (linuxserver.io), latest Plex Pass version
        Plex Media Player on a Mac mini, latest Plex Pass version
        Hard-wired gigabit ethernet from server -> EdgeRouterX -> client

     There are several other docker containers running (Plex, PlexPy, Radarr, Sonarr, Jackett, Deluge, Transmission, Observium, cAdvisor), but I've never seen any of them chew up significant resources for more than a few seconds.

     I'm direct streaming the video. The most recent one to get stuck buffering had a bitrate of 3653 kbps, but I've had the same issue with higher and lower bitrate videos, and I've also direct streamed 12 Mbps videos without issue.

     When the buffering starts, I'll quickly SSH in, run diagnostics, and see what's up, but I'm not seeing any obvious clues. The web UI will often lag for a minute or so around these times. An iperf test between the client and server shows ~940 Mbit/s consistently. Running 'htop', 'iftop', and 'iotop' on the server shows nothing chewing up resources. Then things will mysteriously go back to normal after a few minutes and playback will be fine.

     tower-diagnostics-20170727-2208.zip
  12. I'm planning on building a new server that is quieter and more power efficient than my existing one. The only parts I'm going to keep from my existing unRAID build are the disks (8x HDD, 1x SSD) and perhaps the Syba PCIe x1 to SATA x4 cards if the new motherboard doesn't have enough ports.

     Is there anything special I need to do when I move the disks over to make sure unRAID correctly maps which SATA port is which disk (data, parity, cache) in the correct order? Or will it figure that out on boot regardless of which port each disk is connected to?

     Also, the motherboard I pick probably won't have 9x SATA ports for all of my disks, so I'll re-use my Syba PCIe/SATA cards for the rest. I'm sure I'd want to put the SSD cache on one of the fastest SATA ports (gen3) provided by the motherboard chipset, and I assume I'd also want to put the 2x parity disks on higher-bandwidth SATA ports, only putting the data disks on the slower SATA expansion cards. The parity disks will see more read/write traffic than any of the individual data drives, right?
  13. Has anybody figured out how to get SMART disk temps reported to Observium via SNMP? I'm seeing basically what others in the thread have seen: running the script manually works and reports disk temps, but the temps don't show up in Observium.

        root@Tower:~# /usr/local/emhttp/plugins/snmp/drive_temps.sh
        ST5000LM000-2AN170_WCJ01K50: 29
        ST5000LM000-2AN170_WCJ02FA8: 29
        ST5000LM000-2AN170_WCJ01JAC: 29
        ST5000LM000-2AN170_WCJ02A77: 29
        ST5000LM000-2AN170_WCJ02N74: 30
        ST5000LM000-2AN170_WCJ01GDL: 29
        ST5000LM000-2AN170_WCJ02ANT: 27
        ST5000LM000-2AN170_WCJ02QK7: 30
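A useful intermediate check is whether the script's output is visible over SNMP at all, independent of Observium. This sketch assumes the plugin exposes the script through snmpd's "extend" mechanism (an assumption; the community string and host are placeholders too):

```shell
# Walk the extend output table; the drive temps should appear here if snmpd
# is actually running the script. If they show up, the problem is on the
# Observium side; if not, it's the snmpd configuration.
check_snmp_temps() {
  snmpwalk -v 2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendOutputFull
}
# usage: check_snmp_temps
```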
  14. As suggested by rednoah's response on this thread at the FileBot forums, I did a 'stat' of the input and output paths. He says that if they're on different filesystems, you can't do a hardlink.

        root@a95fc6dbbd67:/# stat /downloads
          File: '/downloads'
          Size: 4096       Blocks: 16         IO Block: 131072 directory
        Device: 21h/33d    Inode: 7           Links: 1
        Access: (0777/drwxrwxrwx)  Uid: (   99/user_99_100)   Gid: (  100/   users)
        Access: 2017-04-17 23:42:15.743366000 -0700
        Modify: 2017-04-20 09:55:52.504612122 -0700
        Change: 2017-04-20 09:55:52.504612122 -0700
         Birth: -
        root@a95fc6dbbd67:/# stat /output
          File: '/output'
          Size: 16         Blocks: 0          IO Block: 131072 directory
        Device: 21h/33d    Inode: 12          Links: 1
        Access: (0777/drwxrwxrwx)  Uid: (   99/user_99_100)   Gid: (  100/   users)
        Access: 2017-04-20 04:00:36.000000000 -0700
        Modify: 2017-04-20 11:05:42.880575241 -0700
        Change: 2017-04-20 11:05:42.880575241 -0700
         Birth: -
        root@a95fc6dbbd67:/# stat /downloads/Malcolm.In.The.Middle.S01.*
          File: '/downloads/Malcolm.In.The.Middle.S01.NTSC.DVD.DD2.0.x264-DON'
          Size: 4096       Blocks: 8          IO Block: 131072 directory
        Device: 21h/33d    Inode: 1531778     Links: 1
        Access: (0775/drwxrwxr-x)  Uid: (   99/user_99_100)   Gid: (  100/   users)
        Access: 2017-04-20 09:55:52.001074093 -0700
        Modify: 2017-04-20 09:55:52.003074098 -0700
        Change: 2017-04-20 09:55:52.003074098 -0700
         Birth: -
        root@a95fc6dbbd67:/# stat "/output/TV Shows/Malcolm in the Middle/Season 01"
          File: '/output/TV Shows/Malcolm in the Middle/Season 01'
          Size: 0          Blocks: 0          IO Block: 131072 directory
        Device: 21h/33d    Inode: 1538039     Links: 1
        Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
        Access: 2017-04-20 11:05:42.881575246 -0700
        Modify: 2017-04-20 11:05:42.881575246 -0700
        Change: 2017-04-20 11:05:42.881575246 -0700
         Birth: -

     I'm not sure how to interpret the output. Is it possible, inside a docker container, to do this?
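The "Device:" field is the part that matters here: hard links can only be made within a single filesystem, so the two paths need to report the same device number (and both paths in the output above show 21h/33d, which would normally mean a hard link is possible). A quick scripted version of the check, as a sketch:

```shell
# Return success if both paths report the same device number, i.e. they are on
# the same filesystem and a hard link between them should normally be possible.
same_fs() {
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}
# usage: same_fs /downloads /output && echo "same filesystem"
```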
  15. I'm having a problem where I can't use the hardlink action. I can move, copy, etc., but not hardlink. It gives me an error about an "Invalid cross-device link":

        root@a95fc6dbbd67:/# filebot -rename /downloads/Malcolm.In.The.Middle.S01.* --db TheTVDB --mode interactive --format "{plex} [{vf}, {vc}, {ac}, {af}]" --output=/output --action hardlink
        Rename episodes using [TheTVDB]
        Auto-detected query: [Malcolm in the Middle]
        Fetching episode data for [Malcolm in the Middle]
        [HARDLINK] From [/downloads/Malcolm.In.The.Middle.S01.NTSC.DVD.DD2.0.x264-DON/Malcolm.in.the.Middle.S01E01.Pilot.NTSC.DVD.DD2.0.x264-DON.mkv] to [/output/TV Shows/Malcolm in the Middle/Season 01/Malcolm in the Middle - S01E01 - Pilot [480p, x264, AC3, 2ch].mkv]
        [HARDLINK] Failure: java.nio.file.FileSystemException: /output/TV Shows/Malcolm in the Middle/Season 01/Malcolm in the Middle - S01E01 - Pilot [480p, x264, AC3, 2ch].mkv -> /downloads/Malcolm.In.The.Middle.S01.NTSC.DVD.DD2.0.x264-DON/Malcolm.in.the.Middle.S01E01.Pilot.NTSC.DVD.DD2.0.x264-DON.mkv: Invalid cross-device link
        Processed 0 files
        /output/TV Shows/Malcolm in the Middle/Season 01/Malcolm in the Middle - S01E01 - Pilot [480p, x264, AC3, 2ch].mkv -> /downloads/Malcolm.In.The.Middle.S01.NTSC.DVD.DD2.0.x264-DON/Malcolm.in.the.Middle.S01E01.Pilot.NTSC.DVD.DD2.0.x264-DON.mkv: Invalid cross-device link
        java.nio.file.FileSystemException: /output/TV Shows/Malcolm in the Middle/Season 01/Malcolm in the Middle - S01E01 - Pilot [480p, x264, AC3, 2ch].mkv -> /downloads/Malcolm.In.The.Middle.S01.NTSC.DVD.DD2.0.x264-DON/Malcolm.in.the.Middle.S01E01.Pilot.NTSC.DVD.DD2.0.x264-DON.mkv: Invalid cross-device link
                at net.filebot.util.FileUtilities.createHardLinkStructure(FileUtilities.java:140)
                at net.filebot.StandardRenameAction$5.rename(StandardRenameAction.java:75)
                at net.filebot.cli.CmdlineOperations.renameAll(CmdlineOperations.java:619)
                at net.filebot.cli.CmdlineOperationsTextUI.renameAll(CmdlineOperationsTextUI.java:94)
                at net.filebot.cli.CmdlineOperations.renameSeries(CmdlineOperations.java:244)
                at net.filebot.cli.CmdlineOperations.rename(CmdlineOperations.java:97)
                at net.filebot.cli.ArgumentProcessor.runCommand(ArgumentProcessor.java:83)
                at net.filebot.cli.ArgumentProcessor.run(ArgumentProcessor.java:26)
                at net.filebot.Main.main(Main.java:115)
        Failure (°_°)

     Has anybody seen this?