DontWorryScro

Members

  • Posts: 143
  • Joined
  • Last visited

Recent Profile Visitors: 1646 profile views

DontWorryScro's Achievements: Apprentice (3/14)

Reputation: 14

Community Answers: 2

  1. Oh, it'd sure be amazing to do that turn on -> backup -> turn back off move. Is there a guide on that anywhere? (A rough sketch of that flow is included after this list.) Also, what container update are you referring to in regards to the root vs. luckybackup directory? The container is running on my source machine; I don't have a luckyBackup docker container running on my destination machine, where the id_rsa is kept. I must be brain-farting on this somewhere along the line. Here is a screencap of my docker container config.
  2. I'm not as adept as others. Can you tell me how to take the id_rsa from the root dir that was explained in the guide and copy/move it to wherever this is? And how do I adjust the path in luckyBackup to find the new location? (A rough sketch of copying the key and fixing its permissions is included after this list.) I rewatched the video and replaced any instance of typing root in the CLI with luckybackup, but all things being equal, in the end I just saw this when I navigated to the newly created luckybackup directory for the key. It looks to be empty inside. What did I do wrong? I was able to right-click in the empty space for the option to show hidden files, which did in fact reveal the id_rsa files I had created, but selecting them and attempting the backup popped up a weird password request window in the VNC.
  3. I followed the video guide posted at the top of this thread and it had the docker container running on the source machine. Is a backup machine meant to be pulling from the source machine instead of the source machine pushing to the destination machine? Does it matter? Also why synchronize instead of backup?
  4. This is really becoming too much of a pain to be worth it. I regret finally jumping from 6.11.5, which ran flawlessly for me, and I will most likely be going back.
  5. Any developments? I had gone straight to deleting that plugin to test, and sure enough the errors kept popping up for me. So for me it has to be something else.
  6. Unraid runs normally in safe mode? Does it disable all your plugins? Anything else? Do docker containers still operate OK?
  7. Following. Just curious: say you boot into safe mode, then what? What is the troubleshooting process?
  8. So we still have no insight on this at all, while it seems a ton of people are experiencing it? I think I'm going to revert back to 6.11.5; that version ran flawlessly for me and I don't need ZFS. The only thing is Community Apps doesn't seem to be supported before 6.12 anymore, which seems sort of ridiculous.
  9. I too have been getting this. It even happens while my Plex is idle and nothing is being watched. I've deleted EasyAudioDecoder before, due to some suggestion I stumbled on, but clearly that hasn't worked; I suspect it just gets redownloaded on the next restart or update. What's the answer here? For the sake of thoroughness I'll also include the transcode settings I have in Unraid and in Plex to confirm that's not the issue: Unraid lsio Plex container Plex
  10. Also noticed this in my logs today. I'm on 6.11.5. Still nothing, eh? (A sketch of raising that limit is included after this list.)
      php-fpm[14714]: [WARNING] [pool www] server reached max_children setting (50), consider raising it
  11. This?
      root@Snuts:~# cat /etc/cron.d/root
      # Generated docker monitoring schedule:
      10 0 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate check &> /dev/null
      # Generated system monitoring schedule:
      */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
      # Generated mover schedule:
      40 1 * * * /usr/local/sbin/mover |& logger
      # Generated parity check schedule:
      0 3 * 2,5,8,11 0 [[ $(date +%e) -le 7 ]] && /usr/local/sbin/mdcmd check &> /dev/null || :
      # Generated plugins version check schedule:
      10 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
      # Generated btrfs scrub cache nvme schedule:
      25 2 7 * * /usr/local/emhttp/plugins/dynamix/scripts/btrfs_scrub start /mnt/cache_nvme -r &> /dev/null
      # Generated Unraid OS update check schedule:
      11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
      # Generated Schedule for CA Auto Turbo Mode
      0 0 * * * /usr/local/emhttp/plugins/ca.turbo/scripts/turboSchedule.php disable 420 > /dev/null 2>&1
      # Generated cron settings for docker autoupdates
      10 2 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateDocker.php >/dev/null 2>&1
      # Generated cron settings for plugin autoupdates
      5 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1
      # CRON for CA background scanning of applications
      35 * * * * php /usr/local/emhttp/plugins/community.applications/scripts/notices.php > /dev/null 2>&1
      # Generated system data collection schedule:
      */1 * * * * /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &> /dev/null
      # Generated schedules for parity.check.tuning
      */6 * * * * /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null
      # Purge recycle bin at 5:00 AM on the first day of the week:
      0 5 * * 0 /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin cron &> /dev/null
      # Refresh Recycle Bin trash sizes every five minutes:
      */5 * * * * /usr/local/emhttp/plugins/recycle.bin/scripts/get_trashsizes &> /dev/null
      # Generated cron schedule for user.scripts
      */10 * * * * /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/chown_soulseek_complete/script > /dev/null 2>&1
      0 */1 * * * /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/delete media that is stuck/script > /dev/null 2>&1
  12. OK, with Testing mode on, the failure to resume the data rebuild occurred again overnight. The rebuild is still paused as I type this, even over 7 hours after the mover operation finished. Diagnostics attached. Edit: I also just noticed after posting this that a Parity Check Tuning plugin update was available to me, so I've gone ahead and updated. I also manually restarted the Parity-Sync. 2nd Edit: Shortly thereafter, a notification popped up from Parity Check Tuning at 12:42 that "[SNUTS] mover no longer running". I'll paste in the additional information from the logs after I manually resumed the parity-sync:
      Jul 4 12:40:03 Snuts kernel: mdcmd (41): check resume
      Jul 4 12:40:03 Snuts kernel: md: recovery thread: recon P ...
      Jul 4 12:40:15 Snuts Parity Check Tuning: TESTING: ... not a parity check so always treat it as an automatic operation
      Jul 4 12:40:15 Snuts Parity Check Tuning: TESTING: actionDescription(recon P, 1, AUTOMATIC, 1) = Parity Sync/Data Rebuild
      Jul 4 12:40:15 Snuts Parity Check Tuning: DEBUG: Parity Sync/Data Rebuild running
      Jul 4 12:40:22 Snuts root: plugin: running: anonymous
      Jul 4 12:40:22 Snuts root: plugin: creating: /boot/config/plugins/parity.check.tuning/parity.check.tuning-2023.07.04.txz - downloading from URL https://raw.githubusercontent.com/itimpi/parity.check.tuning/master/archives/parity.check.tuning-2023.07.04.txz
      Jul 4 12:40:22 Snuts root: plugin: running: /boot/config/plugins/parity.check.tuning/parity.check.tuning-2023.07.04.txz
      Jul 4 12:40:22 Snuts root: plugin: running: anonymous
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING: ... not a parity check so always treat it as an automatic operation
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING: actionDescription(recon P, 1, AUTOMATIC, 1) = Parity Sync/Data Rebuild
      Jul 4 12:40:22 Snuts Parity Check Tuning: DEBUG: Parity Sync/Data Rebuild running
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG ----------- CONFIG begin ------
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG /boot/config/forcesync marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG tidy marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG progress marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG automatic marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG paused marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG disks marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG mover marker file present
      Jul 4 12:40:22 Snuts Parity Check Tuning: Versions: Unraid 6.11.5, Plugin Version: 2023.06.25
      Jul 4 12:40:22 Snuts Parity Check Tuning: Configuration: Array#012(#012 [parityTuningScheduled] => 0#012 [parityTuningManual] => 0#012 [parityTuningAutomatic] => 0#012 [parityTuningFrequency] => 0#012 [parityTuningResumeCustom] => 5,15,25,35,45,55 * * * *#012 [parityTuningResumeDay] => 0#012 [parityTuningResumeHour] => 1#012 [parityTuningResumeMinute] => 15#012 [parityTuningPauseCustom] => 0,10,20,30,40,50 * * * *#012 [parityTuningPauseDay] => 0#012 [parityTuningPauseHour] => 7#012 [parityTuningPauseMinute] => 30#012 [parityTuningNotify] => 1#012 [parityTuningRecon] => 0#012 [parityTuningClear] => 0#012 [parityTuningRestart] => 1#012 [parityTuningMover] => 1#012 [parityTuningBackup] => 1#012 [parityTuningHeat] => 0#012 [parityTuningHeatHigh] => 3#012 [parityTuningHeatLow] => 8#012 [parityTuningHeatNotify] => 1#012 [parityTuningHeatShutdown] => 0#012 [parityTuningHeatCritical] => 2#012 [parityTuningHeatTooLong] => 30#012 [parityTuningLogging] => 2#012 [parityTuningLogTarget] => 0#012 [parityTuningMonitorDefault] => 17#012 [parityTuningMonitorHeat] => 7#012
      Jul 4 12:40:22 Snuts Parity Check Tuning: TESTING:CONFIG Creating required cron entries
      Jul 4 12:40:22 Snuts Parity Check Tuning: DEBUG: Created cron entry for 6 minute interval monitoring
      Jul 4 12:40:23 Snuts Parity Check Tuning: DEBUG: Updated cron settings are in /boot/config/plugins/parity.check.tuning/parity.check.tuning.cron
      Jul 4 12:40:23 Snuts Parity Check Tuning: TESTING:CONFIG ----------- CONFIG end ------
      Jul 4 12:40:23 Snuts root: plugin: parity.check.tuning.plg updated
      Jul 4 12:40:30 Snuts Parity Check Tuning: TESTING: ... not a parity check so always treat it as an automatic operation
      Jul 4 12:40:30 Snuts Parity Check Tuning: TESTING: actionDescription(recon P, 1, AUTOMATIC, 1) = Parity Sync/Data Rebuild
      Jul 4 12:40:30 Snuts Parity Check Tuning: DEBUG: Parity Sync/Data Rebuild running
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING: ... not a parity check so always treat it as an automatic operation
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING: actionDescription(recon P, 1, AUTOMATIC, 1) = Parity Sync/Data Rebuild
      Jul 4 12:42:24 Snuts Parity Check Tuning: DEBUG: Parity Sync/Data Rebuild running
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR ----------- MONITOR begin ------
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR /boot/config/forcesync marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR tidy marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR progress marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR automatic marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR paused marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR disks marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR mover marker file present
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR isArrayOperationActive - parityTuningActive:1, parityTuningPos:2622003844
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR ... not a parity check so always treat it as an automatic operation
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR backGroundTaskHandling: configName=parityTuningMover, value=1, isMoverRunning=0, Array: Active=1, Paused=0
      Jul 4 12:42:24 Snuts Parity Check Tuning: Send notification: mover no longer running:
      Jul 4 12:42:24 Snuts Parity Check Tuning: TESTING:MONITOR ... using /usr/local/emhttp/webGui/scripts/notify -e Parity Check Tuning -i normal -l /Settings/Scheduler -s [SNUTS] mover no longer running
      Jul 4 12:42:25 Snuts Parity Check Tuning: TESTING:MONITOR Deleted mover marker file
      Jul 4 12:42:25 Snuts Parity Check Tuning: TESTING:MONITOR Deleted paused marker file
      Jul 4 12:42:25 Snuts Parity Check Tuning: TESTING:MONITOR ... no action required as array operation not paused
      Jul 4 12:42:25 Snuts Parity Check Tuning: TESTING:MONITOR backGroundTaskHandling: configName=parityTuningBackup, value=1, isBackupRunning=0, Array: Active=1, Paused=0
      Jul 4 12:42:25 Snuts Parity Check Tuning: TESTING:MONITOR Temperature monitoring switched off
      Jul 4 12:42:25 Snuts Parity Check Tuning: TESTING:MONITOR ----------- MONITOR end ------
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING: ... not a parity check so always treat it as an automatic operation
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING: actionDescription(recon P, 1, AUTOMATIC, 1) = Parity Sync/Data Rebuild
      Jul 4 12:48:26 Snuts Parity Check Tuning: DEBUG: Parity Sync/Data Rebuild running
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR ----------- MONITOR begin ------
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR /boot/config/forcesync marker file present
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR tidy marker file present
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR progress marker file present
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR automatic marker file present
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR disks marker file present
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR isArrayOperationActive - parityTuningActive:1, parityTuningPos:2642689172
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR ... not a parity check so always treat it as an automatic operation
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR backGroundTaskHandling: configName=parityTuningMover, value=1, isMoverRunning=0, Array: Active=1, Paused=0
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR backGroundTaskHandling: configName=parityTuningBackup, value=1, isBackupRunning=0, Array: Active=1, Paused=0
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR Temperature monitoring switched off
      Jul 4 12:48:26 Snuts Parity Check Tuning: TESTING:MONITOR ----------- MONITOR end ------
      snuts-diagnostics-20230704-1236.zip
  13. Turns out one of my parity drives fell off again last night, so I replaced all the data and power cables to that cluster of drives and am rebuilding now. This means I will be able to test this again when Mover runs tonight. I'll activate the testing mode you mentioned to see if I can capture what is going on if it does in fact repeat itself.
  14. OK, I managed to re-enable the parity drive referenced in my original post that had dropped off (sdi), and after 3 days of rebuilding it finished without issue and was fine for a couple of days. Today I checked on the server to find that same drive had dropped off with I/O errors. It seems like this has been happening during scheduled mover operations overnight. Can you take a look at the latest diagnostics I've attached and tell me if it is exhibiting the same behavior as before? Do I need to completely re-cable the SATA power and mini-SAS cables to my Unraid NAS to rule that out? The PSU is an EVGA SuperNOVA 1000 G+ 80 Plus Gold, so I assume the power itself is probably adequate. snuts-diagnostics-20230703-1034.zip
  15. My rebuild has completed; however, I do have another drive that I have been meaning to swap in, which will require another 3-day rebuild. If need be, I am willing to take steps to help you troubleshoot what is going on. I manually made the fixes to the php file that were represented in the pull request by user robobub two posts before this. Not sure if that will have any effect, but I thought I'd let you know.
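
Sketch for post 1 (wake -> backup -> shutdown): this is only a minimal outline of the general flow, not anything from the guide. It assumes the destination supports Wake-on-LAN, that etherwake is available on the source, that passwordless SSH to the destination already works, and that plain rsync stands in for a luckyBackup task; every MAC, IP, path, and user name below is a placeholder.

    #!/bin/bash
    # Hypothetical wake -> backup -> shutdown helper. All values are placeholders.
    DEST_MAC="aa:bb:cc:dd:ee:ff"          # MAC address of the destination NAS
    DEST_IP="192.168.1.50"                # IP address of the destination NAS
    SRC_DIR="/mnt/user/backups/"          # data to back up on the source
    DEST_DIR="backupuser@${DEST_IP}:/mnt/user/backups/"

    # 1. Wake the destination and wait until it answers pings.
    etherwake -i eth0 "$DEST_MAC"
    until ping -c1 -W1 "$DEST_IP" >/dev/null 2>&1; do sleep 10; done
    sleep 60   # give SSH and shares a moment to come up

    # 2. Run the backup (swap this rsync call for your luckyBackup task if you prefer).
    rsync -avh --delete -e "ssh -i /root/.ssh/id_rsa" "$SRC_DIR" "$DEST_DIR"

    # 3. Power the destination back off over SSH.
    ssh -i /root/.ssh/id_rsa "backupuser@${DEST_IP}" 'poweroff'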
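Sketch for post 2 (moving the id_rsa out of root's home): only the usual copy-and-fix-permissions steps, assuming the key pair was generated under /root/.ssh inside the container and that the container's non-root user is named luckybackup with its home at /luckybackup; both the user name and the paths are assumptions, so adjust them to match your container.

    # Run inside the luckyBackup container's console (user name and paths are assumptions).
    mkdir -p /luckybackup/.ssh
    cp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub /luckybackup/.ssh/
    chown -R luckybackup:luckybackup /luckybackup/.ssh   # give the container user ownership
    chmod 700 /luckybackup/.ssh                          # SSH refuses keys with loose permissions
    chmod 600 /luckybackup/.ssh/id_rsa
    # Then point the task's private-key path in the luckyBackup GUI at /luckybackup/.ssh/id_rsa.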
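Sketch for post 10 (php-fpm max_children warning): the message itself points at the fix, raising pm.max_children in the www pool configuration. The config path and restart command below are assumptions (they vary between Unraid releases and other distros), and edits outside /boot may not survive a reboot.

    # Check the current limit in the www pool config (path is an assumption).
    grep -n '^pm.max_children' /etc/php-fpm.d/www.conf

    # Raise it, e.g. from 50 to 100, keeping a backup of the file first.
    cp /etc/php-fpm.d/www.conf /etc/php-fpm.d/www.conf.bak
    sed -i 's/^pm.max_children = 50/pm.max_children = 100/' /etc/php-fpm.d/www.conf

    # Restart php-fpm so the new limit takes effect (restart command is an assumption).
    /etc/rc.d/rc.php-fpm restart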