manofcolombia

Members
  • Posts: 36



manofcolombia's Achievements: Newbie (1/14), 0 Reputation

  1. All good man. Stuff happens. Thanks for taking a look. I confirmed that the latest build is working as it used to and the db and related files are now being written out to my volume mount. Thanks
  2. @Stark any luck? Container restarted again and all its config is gone again
  3. Within the last 2 weeks, I started having issues with the hosts/schedules being wiped after restarting/updating the container. The only log line that seemed interesting is this:
     2019-03-03 15:03:13.506 - INFO - [DavosApplication] - No active profile set, falling back to default profiles: default
     I'm assuming the config isn't being saved to a persistent location, so it goes back to a default state. I removed the image and all appdata to start over and found that the container does not appear to write its persistent data to /config (mapping) anymore. Based on the last log message written to the volume mapping, this started on 2/10.
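     For reference, a minimal sketch of the kind of volume mapping I mean (image name, port, and host path below are illustrative assumptions, not my exact template):

       # Example only: the point is that /config inside the container must map
       # to a persistent host path, otherwise the db/schedules reset on restart.
       docker run -d \
         --name davos \
         -p 8080:8080 \
         -v /mnt/user/appdata/davos:/config \
         linuxserver/davos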
  4. Yeah, I do. Between what I pull into Grafana and ELK, I don't have much need for the built-in notifications. And I log in basically daily.
  5. # If you don't want the output of a cron job mailed to you, you have to direct
     # any output to /dev/null. We'll do this here since these jobs should run
     # properly on a newly installed system. If a script fails, run-parts will
     # mail a notice to root.
     #
     # Run the hourly, daily, weekly, and monthly cron jobs.
     # Jobs that need different timing may be entered into the crontab as before,
     # but most really don't need greater granularity than this. If the exact
     # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
     # needs, feel free to adjust them.
     #
     # Run hourly cron jobs at 47 minutes after the hour:
     47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
     #
     # Run daily cron jobs at 4:40 every day:
     40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     #
     # Run weekly cron jobs at 4:30 on the first day of the week:
     30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
     #
     # Run monthly cron jobs at 4:20 on the first day of the month:
     20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null
     0 5 * * 6 /usr/local/emhttp/plugins/ca.backup2/scripts/backup.php &>/dev/null 2>&1
  6. I would say that this is incorrect. I could have other monitoring in place - which I do - and/or if I select a different notification agent, such as Pushbullet, I could/would want to disable email notifications so I wouldn't be getting duplicate notifications.
  7. Here you go
     root@stylophora:~# cat /etc/cron.d/root
     # Generated docker monitoring schedule:
     10 0 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
     # Generated system monitoring schedule:
     */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     # Generated mover schedule:
     0 0 * * * /usr/local/sbin/mover &> /dev/null
     # Generated parity check schedule:
     0 6 5 * * /usr/local/sbin/mdcmd check &> /dev/null || :
     # Generated plugins version check schedule:
     10 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
     # Generated unRAID OS update check schedule:
     11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
     # Generated cron settings for docker autoupdates
     0 4 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateDocker.php >/dev/null 2>&1
     # Generated cron settings for plugin autoupdates
     0 3 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1
     # Generated ssd trim schedule:
     0 6 * * * /sbin/fstrim -a -v | logger &> /dev/null

     root@stylophora:~# cat /boot/config/plugins/dynamix/dynamix.cfg
     [display]
     date="%c"
     number=".,"
     scale="-1"
     tabs="1"
     users="Tasks:3"
     resize="0"
     wwn="0"
     total="1"
     usage="1"
     banner="image"
     dashapps="icons"
     theme="black"
     text="1"
     unit="C"
     warning="70"
     critical="90"
     hot="45"
     max="55"
     font=""
     header=""
     [notify]
     entity="1"
     normal="1"
     warning="1"
     alert="1"
     unraid="1"
     plugin="1"
     docker_notify="1"
     report="1"
     display="0"
     date="d-m-Y"
     time="H:i"
     position="top-right"
     path="/tmp/notifications"
     system="*/1 * * * *"
     unraidos="11 0 * * *"
     version="10 0 * * *"
     docker_update="10 0 * * *"
     status=""
     [parity]
     mode="3"
     hour="0 6"
     write=""
     day="1"
     dotm="5"
  8. A few times a week, I find this segfault for python in my logs. I believe this has only started since upgrading to 6.6.6.
     Jan 8 08:33:21 stylophora kernel: python[5319]: segfault at 0 ip 000014c312776f59 sp 000014c303809e10 error 6 in libpython2.7.so.1.0[14c31270c000+343000]
     Jan 8 08:33:21 stylophora kernel: Code: fd 29 fd ff e8 38 2d fd ff 48 89 c7 31 c0 48 85 ff 74 4a 48 8b 75 18 4c 8b 47 18 31 c9 48 8b 55 10 48 39 d1 7d 10 48 8b 04 ce <48> ff 00 49 89 04 c8 48 ff c1 eb e7 48 c1 e2 03 48 03 57 18 48 8b
     stylophora-diagnostics-20190108-1911.zip
  9. Since upgrading to 6.6.6, I see this log message every day around the same time:
     Jan 8 06:00:13 stylophora sSMTP[4074]: Creating SSL connection to host
     Jan 8 06:00:13 stylophora sSMTP[4074]: SSL connection using ECDHE-RSA-CHACHA20-POLY1305
     Jan 8 06:00:13 stylophora sSMTP[4074]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials g25sm28692448qki.29 - gsmtp)
     I had never previously set up email notifications for anything, so I took a look at the notifications page in settings and found that email notifications are all unchecked; however, the default Gmail SMTP server info is there with no way to disable it. There is no option to disable this under "preset service", and if you choose custom service and blank out all the options, it will not let you save the form. Hitting reset reverts to the default Gmail preset. Expected outcome: like the rest of the notification services, being able to disable the notification types.
     stylophora-diagnostics-20190108-1911.zip
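     For context, a minimal sketch of what an sSMTP config for the Gmail preset typically looks like (the path and every value below are illustrative assumptions, not pulled from my box):

       # /etc/ssmtp/ssmtp.conf (example only)
       # The Gmail preset the GUI won't let me clear:
       mailhub=smtp.gmail.com:587
       UseSTARTTLS=YES
       # Placeholder credentials; these failing would explain the 535 error:
       AuthUser=example@gmail.com
       AuthPass=app-password-here
       FromLineOverride=YES
       root=postmaster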
  10. Running into this, and I believe it is causing some movies to be constantly missing, even though they are local to the box where Radarr lives:
      Import failed, path does not exist or is not accessible by Radarr: /home/hd3/manofcolombia/torrents/data/The.Butterfly.Effect.2004.DC.1080p.BluRay.H264.AAC-RARBG
      My workflow goes like this: Ombi -> Radarr -> Deluge on seedbox -> FileBot on the seedbox renames and moves to the movies folder on the seedbox -> Davos SFTPs the seedbox movie folder to the on-prem movie folder -> Plex scans and imports. The path above is the Deluge download path on the seedbox, so I assume Deluge is sending that path back. How do I go about handling this without leaving a ton of data on the seedbox, and without being able to mount the seedbox path into the Radarr container on prem?
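      For what it's worth, a quick way to confirm the path really isn't visible to Radarr is to check from inside the container (the container name "radarr" here is an assumption; substitute whatever yours is called):

        # If this errors with "No such file or directory", Radarr genuinely cannot see the path.
        docker exec radarr ls /home/hd3/manofcolombia/torrents/data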
  11. Still dies at some point after heavy use. But there are no more timeouts in the logs. stylophora-diagnostics-20190102-0651.zip
  12. Just got it happening a couple of times since I tested - but it was literally just happening. Here are the fresh diagnostics. The worst thing is, I start getting crazy cpu_io_wait, I assume because the Plex container expects the path to be there but the path isn't healthy. So what I end up having to do is:
      1. Stop Plex - because the share normally won't remount while Plex is still running, and it gives me a blank error reason.
      2. Unmount the share - this sometimes works on the first try, but it can take 30-45+ seconds for the plugin to report that it's actually unmounted, and sometimes it doesn't work on the first try.
      2a. Start Plex if I don't feel like dealing with #3, and we're back to normal.
      3. Mount the NFS share again - very rarely works on the first try.
      4. Start Plex once the share is mounted successfully again.
      stylophora-diagnostics-20181231-1834.zip
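      For reference, roughly what those steps look like from the CLI (I normally do this through the plugin GUI; the mount point, server, and export below are placeholders, not my real paths):

        docker stop plex
        # Unmount the hung share; fall back to a lazy unmount if it reports busy:
        umount /mnt/disks/media || umount -l /mnt/disks/media
        # Remount the NFS export (server and export path are placeholders):
        mount -t nfs nfs-server:/export/media /mnt/disks/media
        docker start plex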
  13. Testing it now. Set the share up as a separate library so the setup should be identical now.
  14. Try this then. It should be in there. I see the logs of it not responding. stylophora-diagnostics-20181230-1314.zip