Pducharme Posted February 5, 2017

Hi, I see from other threads that this was an issue a while ago that was resolved, but in my case it has only started happening now. I moved to the new UnRAID 6.3 release (the final build) a couple of days ago. I didn't try any of the 6.3 betas, so it's not a leftover from a Release Candidate.

The email subject is:

cron for user root /usr/bin/run-parts /etc/cron.daily 1> /dev/null

The body of the message contains:

error: Ignoring tor because of bad file mode - must be 0644 or 0444.
Squid Posted February 5, 2017

(quoting Pducharme's post above)

100% from preclear. Uninstall & reinstall tends to fix it.
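For anyone else landing here from search: that "bad file mode - must be 0644 or 0444" wording is logrotate refusing to read a config file whose permissions aren't one of those two modes, so the manual fix (if reinstalling the plugin doesn't do it for you) is a chmod on the offending file. The sketch below runs against a scratch directory so it's safe to paste anywhere; on a live server the real file would be something like /etc/logrotate.d/tor, which is an assumption based on the error text, not something confirmed in this thread.

```shell
# Reproduce the condition logrotate complains about, then apply the fix.
# Uses a temp dir as a stand-in for /etc/logrotate.d (assumed path).
dir=$(mktemp -d)
touch "$dir/tor"
chmod 0755 "$dir/tor"        # executable bit set: the "bad file mode"

stat -c '%a' "$dir/tor"      # shows 755

chmod 0644 "$dir/tor"        # the fix: one of the two accepted modes
stat -c '%a' "$dir/tor"      # shows 644
```

On a real box you'd run only the final chmod, against whatever file the error names.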
Pducharme Posted February 5, 2017 Author Share Posted February 5, 2017 thanks, i'll try this and report tomorrow (happens only once a day) Quote Link to comment
JohanSF Posted December 21, 2018

Seemingly out of nowhere, I've also started receiving such an email on a daily basis. I found this thread from a search. I have run the Docker Safe New Perms tool, and I have never had the preclear plugin installed. I'm on unraid 6.6.6.
Pducharme Posted January 5, 2019 Author

Funny that I'm back in my own old topic looking for a fix. I'm now getting the same errors as @JohanSF. Multiple lines of errors:

error: skipping "/var/log/apcupsd.events" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
error: skipping "/var/log/docker.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
error: skipping "/var/log/syslog" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
error: skipping "/var/log/vsftpd.log" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
error: skipping "/var/log/wtmp" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.
error: skipping "/var/log/btmp" because parent directory has insecure permissions (It's world writable or writable by group which is not "root") Set "su" directive in config file to tell logrotate which user/group should be used for rotation.

I didn't change anything; it seems to have started 1 or 2 weeks ago.
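A note for others hitting these "insecure permissions" lines: newer logrotate releases refuse to rotate logs in a directory that is group- or world-writable, and the two usual remedies are exactly what the message suggests: tighten the directory mode, or add the `su` directive so logrotate rotates as an explicit user/group. The sketch below works on a throwaway copy so nothing live is touched; that the real file is /etc/logrotate.conf on Unraid is my assumption, not something stated in the thread.

```shell
# Demonstrated on a scratch file standing in for /etc/logrotate.conf.
conf=$(mktemp)
printf 'weekly\nrotate 4\n' > "$conf"

# Remedy 1 (on a real box): make /var/log writable only by root:
#   chmod 755 /var/log

# Remedy 2: prepend the "su" directive the error message asks for,
# unless it is already there:
grep -q '^su root root' "$conf" || sed -i '1i su root root' "$conf"

head -n 1 "$conf"    # -> su root root
```

Either remedy alone should silence the emails; the `su` route avoids changing permissions other software may rely on.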
dgwharrison Posted May 16, 2019

I started getting this email when I updated to 6.7. However, the contents of my email are:

HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device

All of my drives are correctly identified. So I guess it might be what @Squid said about the preclear disk plugin; I've removed it and reinstalled it, and we'll see if the same email comes tomorrow!
Squid Posted May 16, 2019

10 minutes ago, dgwharrison said: So I guess it might be what @Squid said about preclear disk plugin,

Hard to compare problems between a post from 2 years ago and today. You should post your diagnostics.
dgwharrison Posted May 17, 2019

Hi @Squid, yes, it wasn't preclear, or if it was, uninstalling/reinstalling didn't fix it. I received the message again today. Diagnostics attached.

zeus-diagnostics-20190517-2324.zip
Squid Posted May 18, 2019

While your exact message isn't in the syslog, a similar one is there for when your NVMe attempts to go to standby. Disable that in Disk Settings.

Beyond that, disk 8 has multiple read errors. All corrected, so it never got disabled. Trouble is that this all happened during a rebuild of a disk.

And I'm not the one to help on why one of your unassigned drives won't mount:

May 17 09:59:35 Zeus unassigned.devices: Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime '/dev/sdj1' '/mnt/disks/WDC_WD30EZRS-00J99B0_WD-WCAWZ0143001'
May 17 09:59:35 Zeus kernel: XFS (sdj1): Filesystem has duplicate UUID 94397054-2b25-409f-89c2-5901afb836b7 - can't mount
May 17 09:59:35 Zeus unassigned.devices: Mount of '/dev/sdj1' failed. Error message: mount: /mnt/disks/WDC_WD30EZRS-00J99B0_WD-WCAWZ0143001: wrong fs type, bad option, bad superblock on /dev/sdj1, missing codepage or helper program, or other error.
May 17 09:59:35 Zeus unassigned.devices: Partition 'WDC_WD30EZRS-00J99B0_WD-WCAWZ0143001' could not be mounted...
dgwharrison Posted May 18, 2019

23 hours ago, Squid said: (quoting the reply above, including the unassigned-drive mount log)

Thanks Squid, I'll try disabling spin down on the cache. I'm not worried about the disk that won't mount; it's on the way out and is why the array was rebuilding. I'll also spin up in maintenance mode and check the file systems.
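For readers who land here with the duplicate-UUID mount failure shown above (typical after cloning or rebuilding a disk from an image): the kernel refuses to mount a second XFS filesystem whose UUID matches one that is already mounted. A hedged sketch of the usual remedies follows; the device name /dev/sdj1 is taken from the log above, so substitute your own, and the destructive commands are left commented since they need a real, unmounted device.

```shell
# One-off workaround: mount while ignoring the UUID clash
#   mount -t xfs -o nouuid /dev/sdj1 /mnt/disks/tmp

# Permanent fix: stamp a fresh random UUID onto the filesystem
# (run with the filesystem unmounted):
#   xfs_admin -U generate /dev/sdj1

# Spotting duplicates ahead of time: list every UUID and print any
# that occurs twice. Shown here on sample values so it runs anywhere:
printf '%s\n' 9439-7054 aaaa-bbbb 9439-7054 | sort | uniq -d   # -> 9439-7054

# On a real system, feed the same pipeline from blkid:
#   blkid -s UUID -o value | sort | uniq -d
```

`-o nouuid` is fine for a one-time data rescue; `xfs_admin -U generate` is the right call if the disk will stay attached.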
dgwharrison Posted May 21, 2019

So I tried setting the cache NVMes not to spin down, I've uninstalled and reinstalled preclear disk, and I've checked the file system of each disk in maintenance mode without the -n flag. I'm at a loss for what to do next; I still get the email every day. I figure it probably isn't the disks, because I upgraded to 6.7 from 6.6.7 before I replaced disk 7, and it was after the 6.7 upgrade that the messages started coming. What should I do next? I've attached a new diagnostics file.

zeus-diagnostics-20190521-0023.zip
JorgeB Posted May 21, 2019

6 hours ago, dgwharrison said: What should I do next?

You should replace the failing disk8.
dgwharrison Posted June 4, 2019

On 5/21/2019 at 4:42 PM, johnnie.black said: You should replace the failing disk8.

Hi @johnnie.black, I removed that disk and rebuilt the array, but I still get the email. Any other ideas?
JorgeB Posted June 4, 2019

54 minutes ago, dgwharrison said: but still get the email.

The error is unrelated to the email, and I have no idea what's causing that; I only meant that you should always replace a failing disk.
magiin83 Posted May 27, 2020

Old thread but still relevant. I'm on 6.5.0. I randomly saw the other day that an update was available for cache_dirs, which I thought was strange since it didn't look like there had really been an update; the change log still showed the last entry as "2018.12.04 - Merged updates from Alex R. Berg". Nevertheless, I clicked update, and now I've also started getting the /etc/cron.daily error email with the following:

cron for user root /usr/bin/run-parts /etc/cron.daily 1> /dev/null

error: Ignoring cache_dirs because of bad file mode - must be 0644 or 0444.

Now, considering there wasn't even an update for close to 2 years, what could have actually changed in terms of the files? Also, what is this even referring to that requires a permission change? Any help would be much appreciated.
Squid Posted May 27, 2020

2 hours ago, magiin83 said: "2018.12.04 - Merged updates from Alex R. Berg"

You're on a "fork" of cache_dirs. Uninstall it, then install cache_dirs from the Apps tab.
magiin83 Posted May 27, 2020

59 minutes ago, Squid said: You're on a "fork" of cache_dirs.

I never would have thought of that... thank you very much.