[Support] HaveAGitGat - Tdarr: Audio/Video Library Analytics & Transcode Automation



Popular Posts

Tdarr
Application: Tdarr - https://github.com/HaveAGitGat/Tdarr
Docker Hub: https://hub.docker.com/r/haveagitgat/tdarr
GitHub: https://github.com/HaveAGitGat/Tdarr
Documenta…

Nvidia support is in the works. 

Click the info tab on a failed one do you see a permission errors?


10 hours ago, nicksphone said:

@Eggman1414 did that fix it? If not, in Options turn on "Linux FFmpeg NVENC binary (3.4.5 for unRAID compatibility)".

 

Yeah, either that fixed it or it was me using a different plugin that also used NVENC. It shows up now in GPU Stats. I never had that option checked because I thought it was only for version 3.4.5. Thank you.


I started a rather large batch of transcodes, and unRAID is now complaining about 98% Docker image usage.

I'm VERY new to Docker, so I barely know where to start. But from reading other posts here, I know at least to run these commands. Below is my Docker config screen. It looks like everything is mounted to the array, so I'm confused.

I did hop into the container's command line and found that ~/Tdarr/bundle/programs/server had cached transcodes inside it. After the batch completed and those files were gone, the docker.img usage did not shrink by much (99% to 86%).

 

Not sure where to go from here...

du -h -d 1 /var/lib/docker/
160K	/var/lib/docker/containerd
3.5M	/var/lib/docker/containers
0	/var/lib/docker/plugins
222G	/var/lib/docker/btrfs
31M	/var/lib/docker/image
44K	/var/lib/docker/volumes
0	/var/lib/docker/trust
56K	/var/lib/docker/network
0	/var/lib/docker/swarm
16K	/var/lib/docker/builder
56K	/var/lib/docker/buildkit
100K	/var/lib/docker/unraid
0	/var/lib/docker/tmp
0	/var/lib/docker/runtimes
222G	/var/lib/docker/
docker ps -s
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                              NAMES               SIZE
8ecb4d32c731        haveagitgat/tdarr_aio   "/bin/sh -c 'sudo se…"   3 days ago          Up 3 days           0.0.0.0:8265->8265/tcp             tdarr_aio           2.49GB (virtual 10.2GB)
6a79c033ca82        openspeedtest/latest    "/docker-entrypoint.…"   3 days ago          Up 3 days           3000/tcp, 0.0.0.0:3001->8080/tcp   OpenSpeedTest       2B (virtual 54.1MB)
b31f010c9faf        binhex/arch-krusader    "/usr/bin/tini -- /b…"   2 weeks ago         Up 2 weeks          5900/tcp, 0.0.0.0:6080->6080/tcp   binhex-krusader     35.1MB (virtual 1.92GB)
du -sh /mnt/user/system/docker/
20G	/mnt/user/system/docker/


17 hours ago, joshbgosh10592 said:

I started a rather large batch of transcodes, and unRAID is now complaining about 98% Docker image usage. […]

In the library, under Transcode cache, did you set a directory? Also, you can set the Docker image size to 500 GB and it will still only use the space it currently needs; you're just telling Docker how much it's allowed to grow. If you've ever used VMs on Windows, think of it like a thin-provisioned virtual hard drive: the system thinks it's X size, but it only uses Y size on the real disk.
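The thin-provisioning behaviour described above can be seen directly with any sparse file; this is a minimal, self-contained sketch (using a throwaway path under /tmp) of the same idea behind docker.img:

```shell
# A sparse file reports a large apparent size but only consumes
# the blocks actually written, just like unRAID's docker.img.
truncate -s 1G /tmp/sparse-demo.img            # 1 GiB apparent size
ls -lh /tmp/sparse-demo.img                    # reports ~1.0G
du -h /tmp/sparse-demo.img                     # actual usage: ~0
# Write 10 MiB into it; real usage grows only by what was written.
dd if=/dev/zero of=/tmp/sparse-demo.img bs=1M count=10 conv=notrunc status=none
du -h /tmp/sparse-demo.img                     # usage grows to ~10M
rm -f /tmp/sparse-demo.img
```

This is also why raising the configured docker.img size is safe in itself: the file's on-disk footprint only grows as containers actually write data.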

6 hours ago, nicksphone said:

In the library, under Transcode cache, did you set a directory? […]

I did not specify a directory; I'm assuming that's part of my issue, and why the file size shrank only a little after the batch completed. Where should that be pointed? I'd love to have the cache just go to RAM, as I have PLENTY of RAM but am strapped for storage on my cache drive pool (for now).

I've read that increasing the Docker image size is against best practices; is that correct? I can do it for now, but I'd rather figure out why this container keeps growing. That explanation makes sense, though. I'm actually a VM admin for the company I work for, so that setting just sets the largest size the dynamically expanding "disk" can grow to. Thank you for explaining and making the comparison!

9 hours ago, joshbgosh10592 said:

I did not specify a directory; I'm assuming that's part of my issue. […]

If you want it to go to RAM, you need two steps:

In the container settings, set the transcode host path to /tmp/.

Then in the library, under Transcode cache folder, put /home/Tdarr/cache so you know it's set.
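Before pointing the transcode host path at /tmp, it's worth confirming the path really is RAM-backed (tmpfs). On unRAID, /tmp is tmpfs; on most other distros /dev/shm is, which is what this sketch checks:

```shell
# Print the filesystem type of a candidate RAM-backed directory.
# "tmpfs" means writes go to RAM (and swap), not to a disk.
stat -f -c %T /dev/shm        # prints "tmpfs" when RAM-backed
df -h /dev/shm                # its size is a slice of RAM, not disk
```

If the type is anything other than tmpfs, the "RAM cache" would silently land on disk instead.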

7 hours ago, dalben said:

I installed the new container (named just Tdarr). Is there anything special I need to do to get my CPU's iGPU activated? It doesn't seem to be using QSV when encoding.

As far as I know, only two plugins work with QSV. Are you using this one? Tdarr_Plugin_drdd_standardise_all_in_one
Also, installing the Intel GPU Tools plugin helps you see whether the iGPU is being used, if you don't already have it.
Another issue: with new CPUs, the iGPU is sometimes not supported right out of the box, and you have to wait for a kernel update so the built-in Linux drivers land in unRAID. That might also be your problem if you have a new CPU.

10 hours ago, nicksphone said:

If you want it to go to RAM, you need two steps. […]

Thank you for those steps! Any idea how to shrink the docker.img file, though?

On 8/23/2020 at 2:20 PM, nicksphone said:

On the Docker tab, can you click Container Size and copy-paste it for me? Maybe I can see your issue.

Yup!

Name                              Container     Writable          Log
---------------------------------------------------------------------
tdarr_aio                           7.68 GB       395 kB      5.24 MB
binhex-krusader                     1.92 GB      35.1 MB      13.0 kB
Shinobi                             1.05 GB      93.2 MB       540 kB
unmanic                              567 MB       304 kB      5.72 kB
HandBrake                            424 MB          0 B      23.4 MB
QDirStat                             251 MB          0 B      23.4 MB
OpenSpeedTest                       54.1 MB          2 B        928 B
mergerfs-static-build               5.58 MB          0 B      23.4 MB

I knew tdarr_aio was the issue; I just don't know how to clean it up. Thank you!

21 hours ago, joshbgosh10592 said:

Yup! […]

Yeah, you're not going to get it much smaller than that. The more transcodes you do, the bigger it gets; I'm over 8 GB now. It's the way he's built the container, and since it's still in beta, I don't think stripping out the debug tools and unnecessary installs is high on his list at the moment. The great thing about Docker is that with enough googling you can build your own image of this and remove what's not needed if you don't want to wait. I've reached my limit of knowledge with you on this one; I'm still a newbie on a lot of unRAID/Linux stuff.

2 hours ago, nicksphone said:

Yeah, you're not going to get it much smaller than that. […]

There's gotta be something we can do: now that my transcode cache is set to /tmp, the size hasn't changed at all, not even during even bigger batches. I just don't know enough to know where to look... Thank you, though! Hopefully someone else will come across this thread and help us out.

17 hours ago, joshbgosh10592 said:

There's gotta be something we can do: now that my transcode cache is set to /tmp, the size hasn't changed at all. […]

I know it's not orphaned transcodes, as I've reinstalled before: when I had a drive failure, I set everything up from scratch before starting it up again. I think it might be logs, debug tools, and unoptimized packages. Maybe try the Discord server he has set up.


My Tdarr was working fine for a few months using QSV and a QSV HandBrake profile.

I decided to support development, pledged, and installed the "Pro" version (haveagitgat/tdarr:pro_latest).

Tdarr immediately stopped encoding and now gives errors. I think the issue is with QSV support.

I rolled back to the older "non-pro" version and everything works fine without any changes.

I'll be grateful for any ideas.

 

Command: 

HandBrakeCLI -i '/home/Tdarr/Media/Downloads/usenet/completed/series/South.Park.S11E01.720p.BluRay.X264-REWARD/11a6b4d251d34c599e51693b52976d78.mkv' -o '/home/Tdarr/cache/11a6b4d251d34c599e51693b52976d78-TdarrCacheFile-e8Y0fWCFm.mkv' --preset-import-file "/home/Tdarr/Documents/presets.json" -Z "myH265"

Last 200 lines of CLI log:

shellThread

Cannot load libnvidia-encode.so.1

[16:53:03] hb_init: starting libhb thread

[16:53:03] thread 152fa2276700 started ("libhb")

HandBrake 1.3.1 (2020080200) - Linux x86_64 - https://handbrake.fr

6 CPUs detected

Opening /home/Tdarr/Media/Downloads/usenet/completed/series/South.Park.S11E01.720p.BluRay.X264-REWARD/11a6b4d251d34c599e51693b52976d78.mkv...

[16:53:03] CPU: Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz

[16:53:03] - Intel microarchitecture Kaby Lake

[16:53:03] - logical processor count: 6

[16:53:03] Intel Quick Sync Video support: no

[16:53:03] hb_scan: path=/home/Tdarr/Media/Downloads/usenet/completed/series/South.Park.S11E01.720p.BluRay.X264-REWARD/11a6b4d251d34c599e51693b52976d78.mkv, title_index=1

...

...

...

[16:53:04] + bitrate: 640 kbps, samplerate: 48000 Hz

[16:53:04] + AC3 Passthru

[16:53:04] sync: expecting 32224 video frames

ERROR: encx265: x265_param_default_preset failed. Preset (balanced) Tune ((null))

ERROR: Failure to initialise thread 'H.265/HEVC encoder (libx265)'

[16:53:04] comb detect: heavy 0 | light 0 | uncombed 0 | total 0

[16:53:04] decomb: deinterlaced 0 | blended 0 | unfiltered 0 | total 0

[16:53:04] ac3-decoder done: 0 frames, 0 decoder errors

[16:53:04] h264-decoder done: 0 frames, 0 decoder errors

[16:53:04] sync: got 0 frames, 32224 expected

[16:53:04] Finished work at: Sat Aug 29 16:53:04 2020

[16:53:04] libhb: work result = 3

Encode failed (error 3).

HandBrake has exited.
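One plausible reading of the failure above (an assumption, not confirmed by the poster): "balanced" is an Intel QSV preset name (speed/balanced/quality), not a libx265 preset, so if the pro image lost QSV support ("Intel Quick Sync Video support: no") and HandBrake fell back to the software x265 encoder, x265_param_default_preset would reject the preset string. A tiny check against the known x265 preset list:

```shell
# libx265 accepts only these named presets; "balanced" is not one of
# them, which matches the x265_param_default_preset error in the log.
presets="ultrafast superfast veryfast faster fast medium slow slower veryslow placebo"
p="balanced"
case " $presets " in
  *" $p "*) echo "$p is a valid x265 preset" ;;
  *)        echo "$p is NOT a valid x265 preset" ;;
esac
```

If that reading is right, the fix on the affected image would be restoring QSV support (or switching the preset file to an x265 preset such as "medium"), rather than anything in the preset JSON itself.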

On 8/29/2020 at 10:15 PM, sdmellerd said:

My Tdarr was working fine for a few months using QSV and a QSV HandBrake profile. […]

Try his Discord server; you will get a faster response.

8 minutes ago, nekromantik said:

What's the difference between the AIO version and the standard one?

With the standard version you have to install the database software separately, and people were having issues with that. The AIO one is just quicker to set up.


Can't get hardware transcoding to work. I have the latest container named "Tdarr AIO", enabled the "Linux FFmpeg NVENC binary (3.4.5 for unRAID compatibility)" option, and used a script that utilized NVENC, but this is what I get.

Right now Plex can utilize hardware transcoding with no issues, so I know the graphics card and drivers work.

[2020-09-07T20:49:46.534] [INFO] tdarr - Worker gY644hXqm:Launching sub-worker:
[2020-09-07T20:49:47.396] [INFO] tdarr - Worker gY644hXqm:Launching sub-worker successful:
[2020-09-07T20:49:47.396] [INFO] tdarr - Worker gY644hXqm:Sending command to sub-worker:/home/Tdarr/Tdarr/bundle/programs/server/assets/app/ffmpeg/ffmpeg345/ffmpeg -c:v h264_cuvid -i '/home/Tdarr/Media/Movies/Seven Pounds (2008)/Seven Pounds (2008) Bluray-1080p.mkv' -map 0 -map -0:d -c:v hevc_nvenc -rc:v vbr_hq -cq:v 19 -b:v 5627k -minrate 3938k -maxrate 7315k -bufsize 11254k -spatial_aq:v 1 -rc-lookahead:v 32 -c:a copy -c:s copy -map -0:s:0 -max_muxing_queue_size 4096 '/home/Tdarr/cache/Seven Pounds (2008) Bluray-1080p-TdarrCacheFile-kHpdnqmfs.mkv' 
[2020-09-07T20:49:57.114] [INFO] tdarr - Worker gY644hXqm:Sub-worker exit status received
[2020-09-07T20:49:57.118] [INFO] tdarr - Worker gY644hXqm:Exiting
[2020-09-07T20:49:57.119] [INFO] tdarr - onTranscodeError failed
[2020-09-07T20:49:57.122] [INFO] tdarr - Worker exited
[2020-09-07T20:53:53.900] [INFO] tdarr - Worker yY16CIPSD:Sub-worker exit status received
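A first sanity check for this kind of silent NVENC failure, assuming shell access inside the container, is whether the ffmpeg build lists an NVENC encoder at all. This guarded sketch just reports if ffmpeg is absent; substitute the full ffmpeg345 path from the log above as needed:

```shell
# Does this ffmpeg build expose NVENC encoders (hevc_nvenc, h264_nvenc)?
# If nothing is listed, no command-line flags will make GPU encoding work.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -encoders 2>/dev/null | grep -i nvenc \
    || echo "no NVENC encoders listed"
else
  echo "ffmpeg not found on PATH"
fi
```

Even when the encoder is listed, the container still needs the NVIDIA runtime/device passed through (the "Cannot load libnvidia-encode.so.1" message seen elsewhere in this thread is the typical symptom when it isn't).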

7 hours ago, whatupcraig said:

Can't get hardware transcoding to work. […]

Can you click Info on one of the failed transcodes, put the output in a pastebin, and link it? Also, can you screenshot your Docker config page?

