manofcolombia

Members
  • Posts: 40

  • Joined

  • Last visited

Converted

  • Gender: Undisclosed

manofcolombia's Achievements

Rookie (2/14)

Reputation: 0

  1. I've been consolidating 2 arrays into 1 by mounting the 2nd array's drives as unassigned drives and then rcloning the data into the array (see the rclone command sketch after this list). This starts off pretty good and I get ~200MB/s writes when using 16 concurrent transfers. These are all large media files, so not a ton of itty bitty things happening. After about 10-15 minutes, the same transfer (sometimes on the same files, depending on size) will drop to 30-50MB/s. The destination of the copy is all going to 1 drive on the array because of high-water, but it still started off fine at ~200MB/s and it just progressively gets slower till it stabilizes at 30-50MB/s. The destination of the copy is not going to any cache pool, so it's not like it's filling the cache and then switching to the array. I also noticed this with my parity checks: at the beginning of the parity check, it's flying at ~200+MB/s, and by the end the average is ~50MB/s and it's taken ~3 days. I thought maybe it was the drive bay or HBA cable, but I swapped drive bays today and I'm still seeing the same behavior. Cache is already being used for VMs and docker, so it's not like there are competing writes from those. stylophora-diagnostics-20240108-1422.zip
  2. Thanks for the confirmation to set my mind at ease!
  3. Hey @ich777, I was pointed in your direction for this question: I'm currently still running 6.8.2 with the old linuxserver nvidia build. I'm looking to finally upgrade to the latest stable Unraid; however, in the past I've always reverted back to the vanilla build before upgrading. The old plugin is no longer working/supported, so I am curious whether you would suggest trying to get my hands on a vanilla build of 6.8.2 before attempting to upgrade to 6.11, and/or whether there are any other gotchas you can think of regarding my current situation? Thanks
  4. Needless to say, it's been quite a long time since I've upgraded Unraid... I am curious if anyone has done such a big jump in versions or if it is suggested to upgrade in smaller steps. The thing I am most worried about is that I am still running an old nvidia build from @linuxserver.io and I can no longer revert back to the stock image (easily?) to do a more vanilla upgrade.
  5. All good man. Stuff happens. Thanks for taking a look. I confirmed that the latest build is working as it used to, and the db and related files are now being written out to my volume mount. Thanks
  6. @Stark any luck? Container restarted again and all its config is gone again
  7. Within the last 2 weeks, I started having issues with the hosts/schedules being wiped after restarting/updating the container. The only log that seemed interesting is this:
     2019-03-03 15:03:13.506 - INFO - [DavosApplication] - No active profile set, falling back to default profiles: default
     I'm assuming the config isn't being saved to a persistent location, so it goes back to a default state. I removed the image and all appdata to start over and found that the container does not appear to write its persistent data to /config (the mapping) anymore (see the docker volume-mapping sketch after this list). Based on the last log message written to the volume mapping, this started on 2/10.
  8. Yeah, I do. Between what I pull into Grafana and ELK, I don't have too much need for the built-in notifications. And I log in basically daily.
  9. # If you don't want the output of a cron job mailed to you, you have to direct
     # any output to /dev/null. We'll do this here since these jobs should run
     # properly on a newly installed system. If a script fails, run-parts will
     # mail a notice to root.
     #
     # Run the hourly, daily, weekly, and monthly cron jobs.
     # Jobs that need different timing may be entered into the crontab as before,
     # but most really don't need greater granularity than this. If the exact
     # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
     # needs, feel free to adjust them.
     #
     # Run hourly cron jobs at 47 minutes after the hour:
     47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
     #
     # Run daily cron jobs at 4:40 every day:
     40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     #
     # Run weekly cron jobs at 4:30 on the first day of the week:
     30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
     #
     # Run monthly cron jobs at 4:20 on the first day of the month:
     20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null
     0 5 * * 6 /usr/local/emhttp/plugins/ca.backup2/scripts/backup.php &>/dev/null 2>&1
  10. I would say that this is incorrect, as I could have other monitoring in place (which I do), and/or if I select a different notification agent, such as Pushbullet, I could/would want to disable email notifications so I wouldn't be getting duplicate notifications.
  11. Here you go:
      root@stylophora:~# cat /etc/cron.d/root
      # Generated docker monitoring schedule:
      10 0 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
      # Generated system monitoring schedule:
      */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
      # Generated mover schedule:
      0 0 * * * /usr/local/sbin/mover &> /dev/null
      # Generated parity check schedule:
      0 6 5 * * /usr/local/sbin/mdcmd check &> /dev/null || :
      # Generated plugins version check schedule:
      10 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
      # Generated unRAID OS update check schedule:
      11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
      # Generated cron settings for docker autoupdates
      0 4 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateDocker.php >/dev/null 2>&1
      # Generated cron settings for plugin autoupdates
      0 3 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1
      # Generated ssd trim schedule:
      0 6 * * * /sbin/fstrim -a -v | logger &> /dev/null

      root@stylophora:~# cat /boot/config/plugins/dynamix/dynamix.cfg
      [display]
      date="%c"
      number=".,"
      scale="-1"
      tabs="1"
      users="Tasks:3"
      resize="0"
      wwn="0"
      total="1"
      usage="1"
      banner="image"
      dashapps="icons"
      theme="black"
      text="1"
      unit="C"
      warning="70"
      critical="90"
      hot="45"
      max="55"
      font=""
      header=""
      [notify]
      entity="1"
      normal="1"
      warning="1"
      alert="1"
      unraid="1"
      plugin="1"
      docker_notify="1"
      report="1"
      display="0"
      date="d-m-Y"
      time="H:i"
      position="top-right"
      path="/tmp/notifications"
      system="*/1 * * * *"
      unraidos="11 0 * * *"
      version="10 0 * * *"
      docker_update="10 0 * * *"
      status=""
      [parity]
      mode="3"
      hour="0 6"
      write=""
      day="1"
      dotm="5"
  12. A few times a week, I find this segfault for python in my logs. I believe this has only started since upgrading to 6.6.6.
      Jan 8 08:33:21 stylophora kernel: python[5319]: segfault at 0 ip 000014c312776f59 sp 000014c303809e10 error 6 in libpython2.7.so.1.0[14c31270c000+343000]
      Jan 8 08:33:21 stylophora kernel: Code: fd 29 fd ff e8 38 2d fd ff 48 89 c7 31 c0 48 85 ff 74 4a 48 8b 75 18 4c 8b 47 18 31 c9 48 8b 55 10 48 39 d1 7d 10 48 8b 04 ce <48> ff 00 49 89 04 c8 48 ff c1 eb e7 48 c1 e2 03 48 03 57 18 48 8b
      stylophora-diagnostics-20190108-1911.zip
  13. Since upgrading to 6.6.6, I see this log message every day around the same time:
      Jan 8 06:00:13 stylophora sSMTP[4074]: Creating SSL connection to host
      Jan 8 06:00:13 stylophora sSMTP[4074]: SSL connection using ECDHE-RSA-CHACHA20-POLY1305
      Jan 8 06:00:13 stylophora sSMTP[4074]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials g25sm28692448qki.29 - gsmtp)
      I had never previously set up email notifications for anything, so I took a look at the notifications page in Settings and found that email notifications are all unchecked; however, the default Gmail SMTP server info is there with no way to disable it. There is no option to disable this under "preset service", and if you choose custom service and blank out all the options, it will not let you save the form. Hitting reset reverts to the default Gmail preset. Expected outcome: like the rest of the notification services, being able to disable the notification types. stylophora-diagnostics-20190108-1911.zip
  14. Running into this, and I believe it is causing some movies to be constantly missing, even though they are local to the box where radarr lives:
      Import failed, path does not exist or is not accessible by Radarr: /home/hd3/manofcolombia/torrents/data/The.Butterfly.Effect.2004.DC.1080p.BluRay.H264.AAC-RARBG
      My workflow goes like this: Ombi -> radarr -> deluge on seedbox -> filebot on seedbox renames and moves to movies folder on seedbox -> davos sftp seedbox movie folder to on-prem movie folder -> Plex scans and imports. The above path is the deluge download path on the seedbox, so I assume deluge is sending back that path. How do I go about handling this without leaving a ton of data on the seedbox, and without being able to mount the seedbox path into the radarr container on prem?
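
For post 1, a minimal sketch of the kind of unassigned-devices-to-array copy being described; the source mount point, destination share, and every flag other than the transfer count are assumptions, not details taken from the post:

    # Copy large media files from an Unassigned Devices mount into an array share,
    # using 16 concurrent transfers as described in the post.
    # The paths below are assumed examples, not the poster's actual paths.
    rclone copy /mnt/disks/old_array_disk1 /mnt/user/media --transfers 16 --progress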
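
For post 7, a minimal sketch of the sort of persistent /config volume mapping the post refers to; the image name, host appdata path, and published port are assumptions rather than details confirmed by the post:

    # Run the davos container with /config mapped to persistent host storage,
    # so hosts/schedules survive container restarts and updates.
    # Image tag, host path, and port are assumed examples.
    docker run -d --name davos \
      -v /mnt/user/appdata/davos:/config \
      -p 8080:8080 \
      linuxserver/davos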