The stream froze because ffmpeg was killed as a high-memory consumer when the OOM killer ran:
Jan 30 01:12:57 Alexandria kernel: Lidarr invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
...
...
Jan 30 01:12:57 Alexandria kernel: Tasks state (memory values in pages):
Jan 30 01:12:57 Alexandria kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Jan 30 01:12:57 Alexandria kernel: [ 19532] 0 19532 1466611 1447233 11796480 0 0 ffmpeg
...
...
Jan 30 01:12:57 Alexandria kernel: Out of memory: Killed process 19532 (ffmpeg) total-vm:5866444kB, anon-rss:5789584kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:11520kB oom_score_adj:0
Jan 30 01:12:58 Alexandria kernel: oom_reaper: reaped process 19532 (ffmpeg), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
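For reference, the total_vm and rss columns in the task-state table are counts of memory pages, not kB; assuming the usual 4 KiB page size, ffmpeg's numbers line up with the total-vm:5866444kB and anon-rss:5789584kB figures in the kill message. A minimal conversion sketch:

```python
# Convert page counts from the kernel's task-state table into readable sizes.
# Assumes the usual 4 KiB page size on x86_64.
PAGE_SIZE_KB = 4

def pages_to_gib(pages: int) -> float:
    """Convert a count of 4 KiB pages to GiB."""
    return pages * PAGE_SIZE_KB / (1024 * 1024)

# Values copied from the ffmpeg row of the task-state table above.
ffmpeg_total_vm = 1466611  # pages -> 5866444 kB, matches "total-vm" in the kill line
ffmpeg_rss = 1447233       # pages -> ~5.79 GB, matches "anon-rss" in the kill line

print(f"ffmpeg total_vm ~ {pages_to_gib(ffmpeg_total_vm):.2f} GiB")  # ~5.60 GiB
print(f"ffmpeg rss      ~ {pages_to_gib(ffmpeg_rss):.2f} GiB")       # ~5.52 GiB
```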
That just tells us what got killed as a result of the OOM state, not what was consuming the most memory.
The biggest memory consumer (by total_vm) at the time ffmpeg was killed was this java process:
Jan 30 01:12:57 Alexandria kernel: Tasks state (memory values in pages):
Jan 30 01:12:57 Alexandria kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
Jan 30 01:12:57 Alexandria kernel: [ 21823] 99 21823 3000629 145993 2207744 0 0 java
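Applying the same page-to-size conversion (again assuming 4 KiB pages), java's total_vm works out to roughly 11.4 GiB of address space, while its rss column shows only about 570 MiB actually resident:

```python
# Same 4 KiB page-size assumption applied to the java row above.
PAGE_SIZE_KB = 4

java_total_vm = 3000629  # pages of virtual address space
java_rss = 145993        # pages actually resident

print(f"java total_vm ~ {java_total_vm * PAGE_SIZE_KB / 1024**2:.2f} GiB")  # ~11.45 GiB
print(f"java rss      ~ {java_rss * PAGE_SIZE_KB / 1024:.0f} MiB")          # ~570 MiB
```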
But this table only covers running processes; it doesn't account for memory held by files stored in tmpfs, for example. From what I can see, tmpfs doesn't appear to be using that much memory.
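One way to double-check that is to read the Shmem counter from /proc/meminfo, which includes tmpfs files along with other shared memory; a minimal sketch:

```python
# Report how much memory tmpfs/shared memory is holding, by reading
# the Shmem counter from /proc/meminfo (the value is reported in kB).
def shmem_usage_mib() -> float:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Shmem:"):
                kb = int(line.split()[1])
                return kb / 1024
    raise RuntimeError("Shmem line not found in /proc/meminfo")

if __name__ == "__main__":
    print(f"tmpfs/shared memory in use: {shmem_usage_mib():.1f} MiB")
```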