OFark

Members
  • Posts: 121
  • Joined
  • Last visited

  1. This won't start any more. It was fine yesterday, and the system had just been doing its thing; then today I get this:

     ---Checking if UID: 99 matches user---
     ---Checking if GID: 100 matches user---
     ---Setting umask to 0000---
     ---Checking for optional scripts---
     ---No optional script found, continuing---
     ---Starting...---
     ---Can't get latest version of NZBHydra2, putting container into sleep mode!---

     No NZBHydra any more. I've tried renaming the yml file; no effect.
  2. Are you using Hardware Decoding? It's one of the first options; if so, can you see whether you get the same issue with it switched off?
  3. OK, I'm not sure about this, but can you try the following: after the "-hwaccel_device /dev/dri/renderD128" in the Additional Parameters, add a space and then:

     -vf "hwupload_cuda"

     I'm basing this on a Stack Overflow answer about the error in the log:

     Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'

     It seems you need to tell FFmpeg what to do with QSV, which is why I haven't managed to implement a one-button option for this yet.
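For clarity, a sketch of what the full Additional Parameters string would look like after that change (renderD128 is the device already quoted above; yours may differ):

```shell
# Sketch only: the complete Additional Parameters value after the edit.
# renderD128 is the device from this thread; adjust to your own.
PARAMS='-hwaccel_device /dev/dri/renderD128 -vf "hwupload_cuda"'
echo "$PARAMS"
```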
  4. Right. The System.IO.FileNotFoundException you have there will (probably) be from the post-processing work, checking the file's size AFTER FFmpeg has done its thing. That means there must be another error, from FFmpeg, further up the log.
  5. Yes, yes you can. You can tell FFmpeg to just copy streams. With the Audio stream selector you can tell it to delete tracks that are in a specific language, or to only copy eng or unknown tracks. You can also tell it to delete a track only if there are other tracks left, so films that are only in Mandarin won't end up silent.
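Outside Compressarr, the same idea can be expressed directly with FFmpeg's stream mapping. A hedged sketch (file names are placeholders, and the command is printed rather than run since there's no sample file here):

```shell
# Copy every stream unchanged (-map 0), drop all audio (-map -0:a),
# then add back any audio tagged language=eng, without re-encoding.
# input.mkv / output.mkv are placeholders.
cmd='ffmpeg -i input.mkv -map 0 -map -0:a -map 0:a:m:language:eng -c copy output.mkv'
echo "$cmd"
```

Note that this plain command would fail on a file with no eng-tagged audio; the "keep the last remaining track" safety the selector provides is Compressarr logic, not an FFmpeg flag.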
  6. Wonderful, it's getting somewhere then. What's curious about this missing file is that it must have been found before, so that FFmpeg could fail against it. If you SSH into the container again and follow that path "/storage/.. etc. I suspect it's not going to be there, and you'll need to check the Docker mappings. But you must have had it right before?
  7. It was added a few months ago, February to be precise; a release was made and the issue around the 10-bit pipeline was closed: 10-bit pipeline if there is no 8-bit-only filter enabled and the selected encoder is 10-bit; HDR10 static metadata passthrough; Colorspace filter (using the FFmpeg tonemap filter, which is not the best, and does not implement BT.2390 yet). It was mentioned at the time that this support was only in the nightly build, but that was back in February, so it may have made it into the stable builds by now.
  8. Does your Nvidia card support 10bit? Because Main10 should do it. https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
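If the card does support Main10, a hedged sketch of what a 10-bit NVENC encode looks like at the FFmpeg level (printed rather than run; file names are placeholders):

```shell
# 10-bit HEVC via NVENC: the Main10 profile with the 10-bit p010le
# pixel format, audio copied through untouched.
cmd='ffmpeg -i input.mkv -c:v hevc_nvenc -profile:v main10 -pix_fmt p010le -c:a copy output.mkv'
echo "$cmd"
```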
  9. Apologies for the delay, mad week at work. Your problem is: No VA display found for any default device. This basically means it can't find which device to use for decoding the input stream. Normally FFmpeg goes through a bunch of hardware locations, such as the current display (not relevant with Docker), then some pre-determined locations. What you need to do is tell it which device got passed through using /dev/dri. You can do this by going to the console and typing:

     cd /dev/dri
     ls

     That should give you a list of devices. If you see renderD128, then something has gone wrong with your device forwarding in Docker, as that is one of the default locations FFmpeg will look (but you may want to try it anyway). Otherwise you can specify the following in the "Additional Arguments" box at the bottom of the FFmpeg preset:

     -hwaccel_device /dev/dri/renderD128

     Note: replace renderD128 with whatever device you have. Final note: CPU encoding, whilst much slower, is much better in terms of quality vs file size. Hardware encoding is only better in terms of speed.
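The cd/ls check above can also be wrapped in a tiny script that prints the exact flag to paste into Additional Arguments. A sketch (suggest_hwaccel_device is a hypothetical helper name, and it takes the device directory as a parameter purely so it can be tried against a scratch directory; on a real system you'd pass /dev/dri):

```shell
# Print an -hwaccel_device flag for the first DRI render node found,
# or complain if none exist (pointing at Docker device forwarding).
suggest_hwaccel_device() {
  dir="$1"
  for dev in "$dir"/renderD*; do
    [ -e "$dev" ] || continue
    echo "-hwaccel_device $dev"
    return 0
  done
  echo "no render device found in $dir; check your Docker device forwarding" >&2
  return 1
}

# On a real system: suggest_hwaccel_device /dev/dri
```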
  10. So if you open the log for the individual file ((i) next to the line in the Job panel), you will see the output from FFmpeg. I can almost guarantee it's failed to process your video, but it started and there was an output. I'll not bore you with the detail, but basically FFmpeg uses the error output of its process to report its progress. I have no idea why, but it means I cannot use that stream for error checking. That's why you're seeing FFprobe errors: I can rely on FFprobe to detect that FFmpeg didn't finish properly. You'll need to see what FFmpeg ended with; there'll be a lengthy white section of code that is the output of FFmpeg, and it usually ends with the error details. From past experience with QSV, and having looked at your profile, I think you'll need to tell FFmpeg how to use QSV; it's not provided in a driver the way the Nvidia support is. It's also a lot smaller in terms of file sizes. Check out the following link: https://trac.ffmpeg.org/wiki/Hardware/QuickSync You can see what Compressarr is passing to FFmpeg, and you can specify extra parameters in the Additional Parameters box on the Profile page in Compressarr. I do plan to build a pre-built template for this, but it's not at the top of my priority list right now. If you're struggling with the settings, let me know and I'll see if I can help.
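From the QuickSync wiki linked above, the general shape QSV wants: initialise a QSV device, upload frames to it, and use a _qsv encoder. A hedged sketch (printed rather than run; file names are placeholders):

```shell
# Full QSV hardware transcode shape, per the FFmpeg QuickSync wiki:
# create a QSV device, point filters at it, upload frames, encode with h264_qsv.
cmd='ffmpeg -init_hw_device qsv=hw -filter_hw_device hw -i input.mkv -vf hwupload=extra_hw_frames=64,format=qsv -c:v h264_qsv output.mkv'
echo "$cmd"
```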
  11. Thanks for that. It seems the GitHub-compiled code is one line shorter; I've found that bug and squashed it. GitHub is preparing releases as I type.
  12. In my best programming voice: "What??" Basically, at the line in my source code where that error is happening, that can't happen. There is no object reference. Can I just confirm you are running the latest version? If I assume you are, my next message would be: can you send me the appsettings.json file from your appdata folder? It contains the URL and the API keys for Sonarr/Radarr, so you may want to scrub those if it's publicly accessible.
  13. When you set up a job there is a destination folder, which will list all the folders in the Docker image. One of those should be a map (that you've specified) to an external folder, which is where you want the encoded files to go. I'm not sure what would happen if you created a map to a folder that already existed within the image; which one would "win", so to speak.
  14. v4 is out? OFGS. That may take a few days to sort out.
  15. Hi. First off, we need to check whether the Docker containers can see each other. Console into the Compressarr app and type the following to install curl:

      apt-get install curl

      Then, once that's done, type curl and then the URL to Radarr/Sonarr. I'm guessing here, but it looks like yours might be:

      curl 192.168.1.50:7878/radarr

      This isn't going to work as such, because you'll need to authenticate, but it should return some HTML with an error message in it, rather than nothing, a "Could not resolve host", or a "Failed to connect to..." error. If it returns some HTML then I need you to change the logging to Debug, try again, and send me a copy of the logs plus a screenshot of the status of the connection shown when it tries to connect and fails. Obviously, blur out your API key.
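The two failure messages mentioned correspond to specific curl exit codes (6: could not resolve host; 7: failed to connect), while any HTTP response at all, even an auth error page, exits 0. A hedged sketch of the check as a small function; check_reachable is a hypothetical name, and the test below exercises it with a stubbed curl rather than a live network:

```shell
# Classify the result of the connectivity check the way the post describes.
check_reachable() {
  url="$1"
  curl -s -o /dev/null "$url"
  rc=$?
  case $rc in
    0) echo "reachable (even an auth error page counts)" ;;
    6) echo "could not resolve host" ;;
    7) echo "failed to connect" ;;
    *) echo "curl error $rc" ;;
  esac
}

# e.g. check_reachable 192.168.1.50:7878/radarr
```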