matt.shepker

Members
  • Posts: 7
Everything posted by matt.shepker

  1. Something unusual happened on this machine. It seemed to have gotten stuck at 90% so after 48 hours, I cancelled the test and ran smartctl -t select,0-max /dev/sdb instead. While that was running, I had a power blip (and UPS failure) and the machine rebooted. After reboot, the drives are reporting no errors. I'm not sure what to do at this point other than keep an eye on it. I have ordered a spare drive just in case.
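A selective self-test's progress can be polled from smartctl's output rather than guessed at. Below is a minimal sketch that pulls the "% of test remaining" figure out of a captured report; the excerpt in the script is a trimmed, hypothetical sample, and on the live machine you would capture it with status="$(smartctl -a /dev/sdb)".

```shell
#!/bin/sh
# Hypothetical smartctl -a excerpt; on a real system replace this with:
#   status="$(smartctl -a /dev/sdb)"
status='Self-test execution status:      ( 249) Self-test routine in progress...
                                        90% of test remaining.'

# Pull the "NN% of test remaining" figure out of the report.
remaining=$(printf '%s\n' "$status" | sed -n 's/^[^0-9]*\([0-9][0-9]*\)% of test remaining.*/\1/p')
echo "remaining: ${remaining}%"
```

Run in a loop (watch or cron), this makes it easy to tell whether a long test is still moving or genuinely stuck at the same percentage.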
  2. I'm not sure if I'm missing something, but the extended SMART test has been running for over 24 hours. This is the output as it stands at the moment. SMART-WD-WX62D60C9N5X.txt
  3. I have just rebuilt my array after I had a drive crater. I used the opportunity to somewhat start from scratch as I had some issues that I wanted to clear up. I noticed this morning the drive (sdb) that I put in there is reporting a single read error. It is a "new" drive in that this was a warranty replacement. I am guessing that it is probably a refurb, but I don't know. Any thoughts on what I should do with this? I don't have a spare drive on hand, so I'd need to get another one ordered if it is critical. Let me know your thoughts... sp-urhost01-diagnostics-20210927-0819.zip
  4. I have a script that I want to run when the VPN first connects, to register whatever IP I get with a tracker that I use. When I add this to VPN_OPTIONS:

     --script-security 2 --up /config/openvpn/tun_up.sh

     I get this:

     2021-04-17 09:01:44,949 DEBG 'start-script' stdout output:
     2021-04-17 09:01:44 Multiple --up scripts defined. The previously configured script is overridden.

     And then the VPN never connects. What would be the better option for doing this?
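OpenVPN honours only a single --up hook, which is why adding a second one overrides the script the image already defines and breaks its tunnel setup. One hedged workaround, assuming the image leaves the --route-up hook free, is to attach the registration script there instead: --route-up fires after routes are installed and coexists with --up. In the sketch below, tracker.example.com is a placeholder endpoint, and $ifconfig_local is the tunnel address OpenVPN exports to its script hooks.

```shell
#!/bin/sh
# Sketch of /config/openvpn/tun_up.sh, attached via:
#   VPN_OPTIONS: --script-security 2 --route-up /config/openvpn/tun_up.sh
# so it does not collide with the --up script the image already defines.

announce_url() {
  # tracker.example.com is a placeholder for the real tracker endpoint
  printf 'https://tracker.example.com/announce?ip=%s\n' "$1"
}

# OpenVPN exports the tunnel address as $ifconfig_local; default for dry runs.
url="$(announce_url "${ifconfig_local:-0.0.0.0}")"
echo "registering: $url"
# curl -fsS "$url"   # uncomment on the live system
```

Keeping the URL construction in a function makes the script easy to dry-run outside OpenVPN by exporting ifconfig_local by hand.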
  5. @Josh.5, @Cpt. Chaz - I had ended up disabling everything, restarting, and choosing a smaller directory of files to process. It seems to have handled that just fine. Thanks!
  6. @Josh.5 - I was able to get everything up and running and it processed around 1000 files without issue. Now it is sitting with a bunch of stuff in the queue and it won't kick off. Looking at the log, I see this:

     Traceback (most recent call last):
       File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 6807, in get
         return clone.execute(database)[0]
       File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 4226, in __getitem__
         return self.row_cache[item]
     IndexError: list index out of range

     During handling of the above exception, another exception occurred:

     Traceback (most recent call last):
       File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
         self.run()
       File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/foreman.py", line 423, in run
         next_item_to_process = self.task_queue.get_next_pending_tasks()
       File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/taskqueue.py", line 213, in get_next_pending_tasks
         task_item = self.fetch_next_task_filtered('pending', self.sort_by, self.sort_order)
       File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/taskqueue.py", line 155, in fetch_next_task_filtered
         next_task.read_and_set_task_by_absolute_path(task_item.abspath)
       File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/task.py", line 250, in read_and_set_task_by_absolute_path
         self.read_task_settings_from_db()
       File "/usr/local/lib/python3.6/dist-packages/unmanic/libs/task.py", line 226, in read_task_settings_from_db
         self.settings = self.task.settings.limit(1).get()
       File "/usr/local/lib/python3.6/dist-packages/peewee.py", line 6812, in get
         (clone.model, sql, params))
     unmanic.libs.unmodels.tasksettings.TaskSettingsDoesNotExist: <Model: TaskSettings> instance matching query does not exist:
     SQL: SELECT "t1"."id", "t1"."task_id", "t1"."audio_codec", "t1"."audio_stream_encoder", "t1"."audio_codec_cloning", "t1"."audio_stream_encoder_cloning", "t1"."audio_stereo_stream_bitrate", "t1"."cache_path", "t1"."config_path", "t1"."keep_filename_history", "t1"."debugging", "t1"."enable_audio_encoding", "t1"."enable_audio_stream_transcoding", "t1"."enable_audio_stream_stereo_cloning", "t1"."enable_inotify", "t1"."enable_video_encoding", "t1"."library_path", "t1"."log_path", "t1"."number_of_workers", "t1"."out_container", "t1"."remove_subtitle_streams", "t1"."run_full_scan_on_start", "t1"."schedule_full_scan_minutes", "t1"."search_extensions", "t1"."video_codec", "t1"."video_stream_encoder", "t1"."overwrite_additional_ffmpeg_options", "t1"."additional_ffmpeg_options", "t1"."enable_hardware_accelerated_decoding" FROM "tasksettings" AS "t1" WHERE ("t1"."task_id" = ?) LIMIT ? OFFSET ?
     Params: [1, 1, 0]

     [2021-04-06 06:09:10,842 pyinotify WARNING] Event queue overflowed.
     [W 210406 06:09:10 pyinotify:929] Event queue overflowed.

     Any ideas on what I should do to resolve this?
  7. I just got Unmanic up and running and found two things that I wanted to check on...

     First, when I had the network set to Custom: br0 and left the IP blank so it could pull a DHCP address, it never pulls an IP or spins up the web interface. The Docker DHCP scope is defined and works on other containers. When I set the network to Bridged, it brings up the web interface without issues. Is there something that I'm missing?

     Second, I have everything set up and it is correctly detecting videos that need to be optimized, but the jobs are running as CPU-only jobs and not using the GPU. I have the following lines in the log:

     [cont-init.d] 30-patch-nvidia: executing...
     Detected nvidia driver version: 460.67
     Patch for this (460.67) nvidia driver not found.

     Is that why Unmanic isn't using the GPU? Thanks for any assistance you might be able to render.
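Before chasing the patch message, it is worth confirming the container can see the GPU at all. A small hedged check: with the NVIDIA container runtime wired up, nvidia-smi is injected into the container from the host driver, so its absence means the GPU was never passed through in the first place.

```shell
#!/bin/sh
# Check whether the GPU is visible inside the container at all.
if command -v nvidia-smi >/dev/null 2>&1; then
  # Print GPU name and driver version without the usual table decoration.
  gpu_info="$(nvidia-smi --query-gpu=name,driver_version --format=csv,noheader)"
else
  gpu_info="no nvidia-smi in PATH: NVIDIA runtime not wired into this container"
fi
echo "$gpu_info"
```

If nvidia-smi is missing, no driver patch will help; the container needs the NVIDIA runtime and device variables configured first.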