jackfalveyiv

Members
  • Posts: 37
  1. Not sure if this is the right place to ask for help, but I have a weird issue. I have two Radarr instances in my setup, one standard and one 4k. Today, my 4k instance is authenticating, but the page never loads fully and doesn't get me into the console, while my standard docker is accessible via the proxy link. I can access the docker locally and authenticate, and get at what I need, but it does not work correctly when proxied. I've attached the most relevant screenshots I can think of. Thanks in advance for any guidance. ***UPDATE*** This issue has resolved itself, apparently. No action needed, 4K instance is coming up as expected.
  2. I would like to set up a cron job to delete all files and folders/directories older than 24 hours in my downloads directory. I have made a few attempts, but cron is proving tougher to understand than I expected. I've tried a few versions of the following commands:
     find /mnt/user/path/path/* -type d -mtime +0 -print0 -exec rm {} +
     find /mnt/user/path/path/* -type f -mtime +7 -print0 | xargs -0 rm
     find /mnt/user/path/path/* -mtime -1 -ls -exec rm -d {} +
     find /mnt/user/path/path/* -mtime -1 -ls -exec rmdir -f {} +
     I'm finding varying degrees of success. Essentially, the downloads folder is broken into a few subdirectories, and I want to clean those subdirectories of all files and folders older than 24 hours on a weekly scheduled job. I haven't been able to get this working the way I want; I've run combinations of these scripts with different options enabled, but I'm looking for one script that does what I need. Does anyone have a better example of what I'm trying to accomplish?
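     A minimal sketch of what that weekly cleanup could look like, assuming a hypothetical /mnt/user/downloads share whose top-level subdirectories should themselves be kept:

     # clean_downloads.sh -- paths are hypothetical; adjust to your shares.
     # Pass 1: delete files not modified in the last 24 hours (-mtime +0 means "older than one full day").
     find /mnt/user/downloads/ -mindepth 2 -type f -mtime +0 -delete
     # Pass 2: remove directories that are now empty, keeping the top-level subdirectories (-mindepth 2).
     find /mnt/user/downloads/ -mindepth 2 -type d -empty -delete
     # Example crontab entry to run the script weekly, Sundays at 03:00:
     # 0 3 * * 0 /boot/config/scripts/clean_downloads.sh

     Because -delete makes find work depth-first, the second pass removes nested directories innermost-first, so an emptied tree disappears in a single run.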
  3. I'm definitely interested in this. I unfortunately lost my djaydev docker a week ago, I had assumed the GPU transcoding functionality would be possible with this build.
  4. I'm having a difficult time getting Bazarr to recognize all of my movie shares. I've added some paths in the docker configuration since I have multiple shares where my movies live. I've attempted to use Path Mapping to map the path of a share in Radarr to the one created for Sonarr, but it seems the subtitles are not being pulled, because I'm getting errors such as this:
     Traceback (most recent call last):
       File "/app/bazarr/bin/bazarr/get_subtitle.py", line 70, in get_video
         video = parse_video(path, hints=hints, providers=providers, dry_run=used_scene_name,
       File "/app/bazarr/bin/bazarr/../libs/subzero/video.py", line 61, in parse_video
         return scan_video(fn, hints=hints, dont_use_actual_file=dry_run, providers=providers,
       File "/app/bazarr/bin/bazarr/../libs/subliminal_patch/core.py", line 501, in scan_video
         raise ValueError('Path does not exist')
     ValueError: Path does not exist
     Is using individual Path Mappings the way to ensure all my movie shares will be found in Bazarr, or is there another way to do this?
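     One way to sidestep path mapping entirely (a sketch with hypothetical share names, not your actual configuration) is to mount every movie share at the same container path in both the Radarr and Bazarr dockers, so the paths Radarr hands to Bazarr already exist inside the Bazarr container. On Unraid these would be set in each container's template rather than with docker run, but the idea is the same:

     # Hypothetical host shares; the key is that both containers use identical movie mappings.
     docker run -d --name radarr \
       -p 7878:7878 \
       -v /mnt/user/appdata/radarr:/config \
       -v /mnt/user/movies:/movies \
       -v /mnt/user/movies_uhd:/movies_uhd \
       lscr.io/linuxserver/radarr
     docker run -d --name bazarr \
       -p 6767:6767 \
       -v /mnt/user/appdata/bazarr:/config \
       -v /mnt/user/movies:/movies \
       -v /mnt/user/movies_uhd:/movies_uhd \
       lscr.io/linuxserver/bazarr

     With matching mounts there is nothing to translate, so the Path Mapping settings can stay empty.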
  5. After some investigation I have determined that my Plex DB is corrupt. Following Spaceinvader One's tutorial got me to "Error: database disk image is malformed" when running PRAGMA integrity_check. So, I followed his tutorial and tried a dump/read of the database. When the new db is being created, it seems to fill with data almost all the way, then stop and show as an empty db. I was able to verify this happens every time I attempt to create a new DB from the old DB's data via Krusader. So, I decided to take a copy of the DB to my Mac and use SQLite browser to try an export/import. When I do, the export is successful, but at import I get an error "Error importing data: Error in statement #139: no such module: spellfix1. Aborting execution and rolling back." I have no recourse to fix this issue, but I need someone who knows a bit more about SQL to help me find a way to get this DB back in shape. I'm convinced that the db corruption is the root of all my issues, but I can't figure out why it's so difficult to get this back in some sort of working order.
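     For what it's worth, the spellfix1 error is usually a sign of using a stock SQLite build: Plex's database depends on custom SQLite modules that neither DB Browser nor a plain sqlite3 binary include. A rough sketch of the dump/rebuild using the SQLite tool bundled with recent Plex releases (run from a console inside the Plex container with the Plex Media Server process stopped; paths assume a typical linuxserver.io layout and are worth double-checking):

     cd "/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
     "/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db ".dump" > dump.sql
     "/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db.new < dump.sql
     # If the rebuilt file looks complete, back up the old database, rename the new
     # one over it, fix ownership/permissions, and restart Plex.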
  6. My issue hasn't gone away, but my troubleshooting has led me to trying to do an integrity check on my Plex DB to see if there's an error. When attempting to do the check, I'm getting an "unable to open database file" error. I'm running the PRAGMA integrity_check from inside the Unraid console (actually following Spaceinvaderone's tutorial) and hitting this wall. I'm logged in as root and I believe the permissions are set correctly for this directory and this DB, yet still nothing actionable. Has anyone else run into this?
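     In case anyone hits the same wall: "unable to open database file" from sqlite3 is often not corruption at all, but unquoted spaces in the Plex path or a lock held by a still-running Plex process. A sketch of the check, assuming a typical appdata location and that sqlite3 is available in whichever console you use:

     # Stop the Plex container first so nothing holds a lock on the database.
     cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
     sqlite3 "com.plexapp.plugins.library.db" "PRAGMA integrity_check;"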
  7. I've got another problem: I'm finding slow SQL queries in the library. I think this might be the root of a whole lot of "Couldn't retrieve play queue item" errors, both locally and remotely. Can someone take a look at this diagnostic and help me troubleshoot? trescommas-diagnostics-20211025-1935.zip
  8. My issue is resolved, a second reboot of my Unraid server solved the problem and transcoding is back to running normally.
  9. Managed to fight my way through the previous error, but came across a new one. I had been transcoding successfully for the past week, but my Node stopped a few hours ago and I'm receiving the below errors in the log. Both docker containers for Server and Node are running and I haven't made any configuration changes to either one, unsure what has caused this new problem. Also, I can confirm Health Checks are running without an issue, and I am using the GPU for both tasks.
  10. I've been experiencing a problem with file playback. It seems that random movies and TV episodes are failing to play. I have a pre-roll that runs in front of any movie that starts, and it pretty reliably loads, but both locally and remotely there is a seemingly random assortment of videos that won't play. I have even re-ripped a few DVDs and Blu-rays to try h265, x265, mp4, and mkv at different bitrates, and yet I can't find what is causing the issue. I'm receiving a ton of errors in the Plex docker logs stating the following:
     [mp3 @ 0x152397af5240] Header missing
     Error while decoding stream #0:0: Invalid data found when processing input
     I've been hunting for answers across some of the Reddit Plex subs as well as here, to no avail. If you can provide me with some additional things to troubleshoot, I'd greatly appreciate it.
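     One way to narrow down whether the trouble is in the files themselves or in Plex's transcoder is to decode a failing file directly with ffmpeg (a sketch; the path is hypothetical and assumes ffprobe/ffmpeg are available, e.g. from a console inside the Plex container):

     # List the streams Plex will try to decode.
     ffprobe -v error -show_entries stream=index,codec_type,codec_name -of csv /path/to/failing_movie.mkv
     # Full decode pass with no output file; corrupt headers or frames get reported here.
     ffmpeg -v error -i /path/to/failing_movie.mkv -f null - 2> decode_errors.log

     If the same "Header missing" lines show up for the audio stream here too, the source rip is the likely culprit rather than the server.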
  11. As an update, it seems file size is not the determining factor, as files as small as 1.4 GB are also failing, and I still can't find the reason why.
  12. Linking my issue here: I am not sure what is causing it, but very consistently larger files are failing while smaller ones succeed. I need a little guidance on what else I can troubleshoot here, but I'm curious whether this is related to the plugins I'm using. Any help would be appreciated.
  13. You were correct that my Transcode Cache path was set incorrectly, so thank you for that. After the change, I re-queued the failed transcodes and unfortunately many of the same files are failing, with the same error. I've attached a screenshot of the plugins I'm using, and I will note that the transcodes are going much faster than they were previously, which is of course great. Many of the failed transcodes are Remux Bluray 1080p files, is it possible one of these plugins is having difficulty parsing some of the data? I attached a second screenshot with a sample of one of the information tabs for a failed transcode.
  14. I'm running Tdarr in an Unraid Docker after following the recent Spaceinvaderone guide. I have a large library and am experiencing problems transcoding large files (file sizes ranging from 4GB to 50GB). I keep receiving errors such as:
     [hevc_nvenc @ 0x55b3f455ef00] OpenEncodeSessionEx failed: out of memory (10)
     [hevc_nvenc @ 0x55b3f455ef00] No NVENC capable devices found
     Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
     Conversion failed!
     I also receive an error stating that my /temp directory is out of space when trying to write the file. My /temp share is on a 1TB SSD and I have 32GB of memory; according to the Unraid dashboard I'm not seeing RAM usage spike above 70% at most. Tdarr is successfully transcoding other files under 4GB, but I can't find the bottleneck. I tested with only one worker as well as up to 5 running simultaneously, and fairly consistently the smaller files work while the larger ones reach 70-80% completion before going into the Error/Cancelled category. Some files fail instantaneously. Any help would be appreciated.
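     A small sketch that may help pin down the bottleneck: "OpenEncodeSessionEx failed: out of memory" from hevc_nvenc generally refers to NVENC encode sessions or GPU memory rather than system RAM, and consumer GeForce cards cap the number of concurrent encode sessions. Watching the session count and VRAM while several workers run can show whether that limit is being hit (assumes the Nvidia driver plugin is installed on the host):

     # Refresh every 2 seconds; if sessionCount sits at the card's limit or memory.used
     # approaches memory.total when a large transcode starts, that's the likely cause.
     watch -n 2 'nvidia-smi --query-gpu=encoder.stats.sessionCount,memory.used,memory.total --format=csv'

     It is also worth confirming that the container's /temp path really maps to the 1TB SSD share and not into the (much smaller) docker image, since a large remux could fill docker.img long before the SSD runs out.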