[Plugin] CA User Scripts



 

To add your own scripts:

 

Within the flash drive folder config/plugins/user.scripts/scripts, create a new folder (each script gets its own folder). The name doesn't matter, but it can only contain the following characters: letters ([A-Za-z]), digits ([0-9]), hyphens ("-"), underscores ("_"), colons (":"), periods ("."), and spaces (" ").
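For example, from a console session (a minimal sketch; the folder name my_first_script is just an illustration, and /boot is where unRAID mounts the flash drive):

# Each script gets its own folder on the flash drive (mounted at /boot)
mkdir -p /boot/config/plugins/user.scripts/scripts/my_first_script
# The plugin expects the script itself in a file inside that folder named "script"
cat > /boot/config/plugins/user.scripts/scripts/my_first_script/script << 'EOF'
#!/bin/bash
echo "Hello from user scripts"
EOF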

 

I haven't had any luck making a directory with spaces in the name.

Works perfectly for me.

 

Do you mean that you can't outright create the directory, or that user scripts isn't recognizing it?  If the latter, is there a file within it called script that is the actual script?  (FYI, standard Windows operation is to hide extensions for known file types. Beyond being a big security risk (inadvertently clicking on an exe instead of a txt file), it's a PITA. Turn that off within folder settings, under the View tab.)


Great plugin, thanks for providing it. I have a question related to the scheduler. Background: I am planning to schedule a clean shutdown every night at 0:30:

#!/bin/bash
#
# Shutdown OSX VM
echo "...shutting down OSX first...."
25 0 * * * ssh [email protected] sudo shutdown -h now
#
# Shutdown unRAID
echo "...finally shutting down unRAID...."
30 0 * * * /usr/local/sbin/powerdown

 

I guess that I have to disable the schedule as shown in the attachment? Or do I have to enable it in order to get the script active at boot time? Thanks for clarifying this.

[attachment: screenshot.jpg]


Hi Squid, not sure if you can help me with an issue I am having that has come back up now that powerdown is deprecated for v6.2.

 

In powerdown I had K and S scripts that would mount and remove remote share mounts when the array was stopped and started. Prior to powerdown v2.23 I had issues where my mounts would mysteriously disappear that were solved by simply disabling the K script. The changes dlandon made in v2.23 of his excellent script fixed my issues and my shares stayed mounted without any problems. Now that I have moved to unRAID v6.2 and started using your user scripts plugin my issue has returned. I am randomly losing my remote share mounts after only a day or so of having them mounted. The scripts are set to run on array start and array stop so they shouldn't be executing for any other reason and I don't see any evidence of them running randomly in the system log. I have attached my diagnostics zip to this post.

 

Here is a copy/paste of my post from the powerdown thread that shows what my scripts contain and some other details:

 

Wondering if someone can help me with a behaviour I noticed after setting up a K and S script to mount remote SSHFS shares for my automation setup. I have attached my system log, let

 

Before finding out I could use the powerdown plugin to run script files when the array starts/stops, I was manually mounting my remote SSHFS shares each time I started the machine. The SSHFS shares would stay up without issues for weeks until I manually removed them. Now that I have set up a K00 and S00 script, I have begun noticing that I randomly seem to be losing some of my SSHFS shares. On Saturday I noticed my automation setup was no longer finding recent downloads, and upon checking what was mounted, all three of my SSHFS mounts were gone. Then again this afternoon (8/28) I noticed Sonarr wasn't pulling in my downloads when it had just been working about four or five hours ago, so I checked mount and sure enough my remote SSHFS share for tv downloads was missing.

 

Now, I have no definitive proof that this is being caused by the powerdown plugin; I only have circumstantial guesswork based on never having this issue until I created a K00 and S00 script and added them to the powerdown plugin following your instructions. For more information on the SSHFS commands I am using, check out my post over here: https://lime-technology.com/forum/index.php?topic=50974.msg489503#msg489503

 

I looked through the system log for today and I didn't see the powerdown script initiating the scripts like you normally see, for example:

Aug 25 16:04:26 Node rc.unRAID[13532][13537]: Processing /etc/rc.d/rc.unRAID.d/ start scripts.

Aug 25 16:04:26 Node rc.unRAID[13532][13541]: Running: "/etc/rc.d/rc.unRAID.d/S00.sh"

 

The only odd thing I noticed in the logs from today was apcupsd restarting around 4AM:

Aug 28 04:40:01 Node apcupsd[11573]: apcupsd exiting, signal 15

Aug 28 04:40:01 Node apcupsd[11573]: apcupsd shutdown succeeded

Aug 28 04:40:04 Node apcupsd[3015]: apcupsd 3.14.13 (02 February 2015) slackware startup succeeded

 

but I was using Sonarr long after that and it was all working fine.

 

Here is my K00 script:

#! /bin/bash
umount /mnt/cache/.watch/tv-remote
umount /mnt/cache/.watch/movies
umount /mnt/cache/.watch/music-remote

 

and my S00 script:

#! /bin/bash
sshfs [email protected]:private/deluge/data/couchpotato/ /mnt/cache/.watch/movies/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
sshfs [email protected]:private/deluge/data/sonarr/ /mnt/cache/.watch/tv-remote/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
sshfs [email protected]:private/deluge/data/headphones/ /mnt/cache/.watch/music-remote/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup

 

If anyone has any ideas on why this might have started all of a sudden I am all ears.

 

EDIT: Just lost another share about three minutes ago; going to test if my theory is correct by removing the K00 script and seeing if the shares then stay mounted. I renamed K00.sh to KXX.sh and ran the update script command, so now I'll wait and see if my shares stay in place.

 

EDIT2: Alright, six days since I removed the K00 script and not a single remote mount has been lost.

node-diagnostics-20160920-0936.zip


Hi Squid, not sure if you can help me with an issue I am having that has come back up now that powerdown is deprecated for v6.2. […]

I'll ask dlandon about what changes powerdown 2.23 made that fixed this, but at first glance, I would think that it is coincidental.  Nothing in user scripts should cause a mount to disappear...

Thanks, I would tend to agree with you if not for my previous experience with powerdown. Once I disabled the script that had the umount commands to remove the shares they would stay up and never drop until I manually removed them.

 

The weird behavior makes me think there has to be some kind of signal unRAID is issuing that is tricking these plugins into thinking that the array is being stopped or something like that (of course I am just guessing).


Thanks, I would tend to agree with you if not for my previous experience with powerdown. […]

Can't remember off the top of my head if I put the time of execution in the logging for the start / stop scripts (not at home at the moment).

 

But if the script logs available through user.scripts show only a single execution, then it's not unRAID.


[…] But if the script logs available through user.scripts show only a single execution, then it's not unRAID.

I didn't think to check the script logs, where do I view them? I only see a log icon next to my mount script?

 

Nevermind, found it. I don't even see a log having been generated for the unmount script so I guess the plugin hasn't run the script at all.

 

Like I said before I am just going off what I have experienced previously with the PowerDown plugin and when I disabled the K00 script prior to the v2.23 release I no longer had an issue.

 

I have disabled the schedule for the unmount script in user.scripts for now and I will see if they stay mounted...


For the start script, that's where you would get it...  In theory, if there is nothing showing for the unmount one, then the script has never run since the last reboot, and therefore has never itself unmounted the shares.

 

 

Diagnostics may also shed some light (and if you could point out a rough time the mount became unavailable it would be helpful)


[…] Diagnostics may also shed some light (and if you could point out a rough time the mount became unavailable it would be helpful)

The mounts became unavailable sometime in the last 24 hours; sorry I can't be more specific, I haven't really checked on it much. The only reason I noticed is my couchpotato was pissed off because it couldn't find the remote share it uses. I have a diagnostics log attached to my OP in this thread.


Hi Squid, not sure if you can help me with an issue I am having that has come back up now that powerdown is deprecated for v6.2. […]

 

You have some things going on in your syslog that don't look good:

Sep 18 10:41:42 Node kernel: python[17518]: segfault at 2b043cd61ff8 ip 00002b0438283d9b sp 00002b043cd62000 error 6 in libpython2.7.so.1.0[2b0438214000+341000]
Sep 18 10:41:44 Node kernel: python[10080]: segfault at 2afc522c9ff8 ip 00002afc4a04fd9b sp 00002afc522ca000 error 6 in libpython2.7.so.1.0[2afc49fe0000+341000]
Sep 18 10:41:45 Node kernel: python[10132]: segfault at 2b4ffcbecff8 ip 00002b4ff494f57f sp 00002b4ffcbed000 error 6 in libpython2.7.so.1.0[2b4ff48ed000+341000]
Sep 18 10:41:47 Node kernel: python[10207]: segfault at 2ac0a7019ff8 ip 00002ac09ed9fd9b sp 00002ac0a701a000 error 6 in libpython2.7.so.1.0[2ac09ed30000+341000]
Sep 18 10:41:48 Node kernel: python[10268]: segfault at 2afc2b390ff8 ip 00002afc2310957f sp 00002afc2b391000 error 6 in libpython2.7.so.1.0[2afc230a7000+341000]

 

and

 

Sep 19 22:30:42 Node kernel: ------------[ cut here ]------------
Sep 19 22:30:42 Node kernel: WARNING: CPU: 0 PID: 27309 at ./arch/x86/include/asm/thread_info.h:236 SyS_rt_sigsuspend+0x8f/0x9e()
Sep 19 22:30:42 Node kernel: Modules linked in: xt_nat veth xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net tun vhost macvtap macvlan ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod x86_pkg_temp_thermal coretemp kvm_intel kvm r8169 i2c_i801 i2c_core ahci libahci mii
Sep 19 22:30:42 Node kernel: CPU: 0 PID: 27309 Comm: Threadpool work Not tainted 4.4.19-unRAID #1
Sep 19 22:30:42 Node kernel: Hardware name: Gigabyte Technology Co., Ltd. B85M-DS3H-A/B85M-DS3H-A, BIOS F2 08/10/2015
Sep 19 22:30:42 Node kernel: 0000000000000000 ffff8803c77ebee0 ffffffff8136a68e 0000000000000000
Sep 19 22:30:42 Node kernel: 00000000000000ec ffff8803c77ebf18 ffffffff8104a39a ffffffff81055502
Sep 19 22:30:42 Node kernel: fffffffffffffdfe 0000000000014f98 000000000000000d 000000000000a0b7
Sep 19 22:30:42 Node kernel: Call Trace:
Sep 19 22:30:42 Node kernel: [<ffffffff8136a68e>] dump_stack+0x61/0x7e
Sep 19 22:30:42 Node kernel: [<ffffffff8104a39a>] warn_slowpath_common+0x8f/0xa8
Sep 19 22:30:42 Node kernel: [<ffffffff81055502>] ? SyS_rt_sigsuspend+0x8f/0x9e
Sep 19 22:30:42 Node kernel: [<ffffffff8104a457>] warn_slowpath_null+0x15/0x17
Sep 19 22:30:42 Node kernel: [<ffffffff81055502>] SyS_rt_sigsuspend+0x8f/0x9e
Sep 19 22:30:42 Node kernel: [<ffffffff81620a2e>] entry_SYSCALL_64_fastpath+0x12/0x6d
Sep 19 22:30:42 Node kernel: ---[ end trace 753ae045f3fb133e ]---

 

I'm not very good at these things, but python is segfaulting and it appears your CPU is overheating.


I noticed those in the system log this morning when I was trying to troubleshoot this. The python error took place right around the time I was upgrading from 6.1.9 to 6.2 and I haven't seen it pop up since (I upgraded 9/18 around 9-10AM).

 

I have never seen the CPU temp spike over 45C and utilization is generally pretty low (less than 10%) unless plex is transcoding heavily. What part of those messages makes you think the CPU is overheating?

 

EDIT: Well, apparently when I set up the Dynamix CPU temp plugin I picked the wrong sensor; when plex is really hammering the CPU it gets up to around 58-60C, but that is still well below the thermal throttling threshold for the processor.

 

The unRAID case sits in an air conditioned server room, so besides cranking up the fans manually on the fan controller, my only other option to address a CPU temp issue would be to buy an aftermarket CPU cooler.


Another possible reason.  And I'm going to emphasize possible.

 

First off, are the mounts unavailable from the command line when cp starts bitching?
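For instance, a quick console check (a one-liner sketch; SSHFS mounts show up in the mount table as type fuse.sshfs):

mount | grep sshfs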

 

This could be a slave thing.  Try mounting them into /mnt/disks/whatever (the only location that works with slave modes), adjust the templates to the new mount, and set the mounting mode as rw:slave.
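A sketch of what that would look like, adapting one of the S00 mount commands from earlier in the thread (the /mnt/disks/tv-remote path is just an example name):

# Hypothetical variant: mount under /mnt/disks so the rw:slave mode can see it
mkdir -p /mnt/disks/tv-remote
sshfs [email protected]:private/deluge/data/sonarr/ /mnt/disks/tv-remote/ -o StrictHostKeyChecking=no -o allow_other -o Ciphers=arcfour -o Compression=no -o IdentityFile=/mnt/cache/.watch/PolyphemusAutomationSetup
# ...then point the container template at /mnt/disks/tv-remote with access mode rw:slave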

 


 

 


I noticed those in the system log this morning when I was trying to troubleshoot this. […] What part of those messages makes you think the CPU is overheating?

Sep 19 22:30:42 Node kernel: Modules linked in: xt_nat veth xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net tun vhost macvtap macvlan ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod x86_pkg_temp_thermal coretemp kvm_intel kvm r8169 i2c_i801 i2c_core ahci libahci mii

 

This:  x86_pkg_temp_thermal coretemp kvm_intel kvm r8169 i2c_i801 i2c_core ahci libahci mii

 

But I don't know that much about these faults.  Regardless, I don't think it is good.


[…] But I don't know that much about these faults. Regardless, I don't think it is good.

I agree, I never like seeing those messages, but so far I haven't been able to find any information on what exactly that kernel error means.


As a quick test, put the following in your unmount script so you can see if it is being executed when you don't expect it.

 

logger "My unmount script is running..."

Ok, so that will print that line to the logger daemon so we can see it in the syslog? I have added the line to the beginning of the unmount script.

 

I will move those system log errors to a separate support thread and see if anyone can shed some light on them. If it is indeed a heat issue, I will have to talk to my boss and see if he will let me turn down the A/C in the server room, or I will have to order an aftermarket CPU cooler.


Ok, so that will print that line to the logger daemon so we can see it in the syslog?

 

Yes.  That will tell you if the unmount script is what's causing the unmount.  If you see it in the log, save the log and post it so we can see if the shares are unmounting at the wrong time.
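For reference, the K00 unmount script from earlier in the thread with the logger line added would look like this (a sketch):

#!/bin/bash
# Log to syslog so any unexpected execution shows up with a timestamp
logger "My unmount script is running..."
umount /mnt/cache/.watch/tv-remote
umount /mnt/cache/.watch/movies
umount /mnt/cache/.watch/music-remote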


Great plugin, thanks for providing it. I have a question related to the scheduler. Background: I am planning to schedule a clean shutdown every night at 0:30:

#!/bin/bash
#
# Shutdown OSX VM
echo "...shutting down OSX first...."
25 0 * * * ssh [email protected] sudo shutdown -h now
#
# Shutdown unRAID
echo "...finally shutting down unRAID...."
30 0 * * * /usr/local/sbin/powerdown

 

I have activated the script at boot time but I am getting error messages:

...shutting down OSX first....                                                                                                      
/tmp/user.scripts/tmpScripts/daily_shutdown/script: line 5: 25: command not found                                                   
...finally shutting down unRAID....                                                                                                 
/tmp/user.scripts/tmpScripts/daily_shutdown/script: line 9: 30: command not found                                                   
Script Finished Fri, 23 Sep 2016 11:08:59 +0200 

 

These are the lines starting with the cron definitions.


Actually rather simple, since those lines won't execute from a straight command line either.

 

I'm sure there's a million ways of doing it, such as adding the cron via crontab, or using the AT command, etc.
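The crontab route might look like this (a sketch; the two entries reuse the commands and times from the original script):

# Append both jobs to root's crontab so cron, not the script body, handles the timing
(crontab -l 2>/dev/null; echo '25 0 * * * ssh [email protected] sudo shutdown -h now'; echo '30 0 * * * /usr/local/sbin/powerdown') | crontab -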

 

It threw me for a loop when I looked at the post this morning since I thought the problem was with user.scripts, and not with your script itself.
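Alternatively, since user.scripts has its own scheduler, the script can simply drop the cron fields and let the plugin (or cron) handle the timing; a sketch, with a sleep standing in for the five-minute gap between the original 0:25 and 0:30 entries:

#!/bin/bash
#
# Shutdown OSX VM
echo "...shutting down OSX first...."
ssh [email protected] sudo shutdown -h now
#
# Wait out the five-minute gap the two cron lines had
sleep 300
#
# Shutdown unRAID
echo "...finally shutting down unRAID...."
/usr/local/sbin/powerdown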

