OFark

Everything posted by OFark

  1. Can you send me a screenshot of the value you are trying to set?
  2. When you say it doesn't work... Doesn't start? The client webpage is a 404? What doesn't work?
  3. Found it - Settings - User Utilities - Compose. Thanks. That got it working.
  4. Um, not sure where I can change this? Do you mean Unraid settings, or this plugin's settings? (Not that I've found any settings for this plugin.) If you mean Unraid settings, I don't see an output style option.
  5. Seems as of 6.10 the docker compose commands just show a 404 window. The server nginx log shows: 2022/05/19 12:32:38 [error] 14890#14890: *80825 "/usr/local/emhttp/dockerterminal/compose_manager_action/index.html" is not found (2: No such file or directory) while sending to client, client: #####, server: , request: "GET /dockerterminal/compose_manager_action/ HTTP/1.1", host: "#####", referrer: "http://#####/plugins/compose.manager/php/show_ttyd.php?done=Done"
  6. Have you published an app, got it into Docker and it's not recognising updates?
  7. I created a docker app in CA just for Diskover. Search for OFark or ElasticSearch. It's been a while but I'm assuming it still works.
  8. This won't start any more. It was fine yesterday, and the system has just been doing its thing, then today I get this: ---Checking if UID: 99 matches user--- ---Checking if GID: 100 matches user--- ---Setting umask to 0000--- ---Checking for optional scripts--- ---No optional script found, continuing--- ---Starting...--- ---Can't get latest version of NZBHydra2, putting container into sleep mode!--- No NZBHydra any more. I've tried renaming the yml file, no effect.
  9. Are you using Hardware Decoding? It's one of the first options. If so, can you see if you get the same issue with it switched off?
  10. Ok, not sure about this, but can you try adding a space after the "-hwaccel_device /dev/dri/renderD128" in the Additional Parameters, followed by: -vf "hwupload_cuda" I'm basing this off of a StackOverflow answer to the error in the log: "Impossible to convert between the formats supported by the filter 'Parsed_null_0' and the filter 'auto_scaler_0'". It seems like you need to tell FFmpeg what to do with QSV, which is why I haven't managed to implement a one-button option for this yet.
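      Just so it's clear what the box would end up holding, roughly (the device path and filter are only the example values from above, so adjust them for your own setup):
          -hwaccel_device /dev/dri/renderD128 -vf "hwupload_cuda"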
  11. Right, the error message you have there, System.IO.FileNotFoundException, will (probably) be from the post-processing work, checking the file for size, AFTER FFmpeg has done its thing. Meaning there must be another error, from FFmpeg, further up the log.
  12. Yes, yes you can. You can tell FFmpeg to just copy streams. With the Audio stream selector you can tell it to delete tracks that are of a specific language, or to only copy eng or unknown tracks. You can also tell it to delete the track but only if there are other tracks, so films that are only in Mandarin won't be silent.
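      Roughly what the simple "keep eng/unknown audio, copy everything else" case boils down to on the FFmpeg side, if you were doing it by hand (Compressarr builds the arguments for you, so this is just for illustration; the "only if other tracks exist" rule isn't expressible in a single command like this):
          ffmpeg -i input.mkv -map 0:v -map 0:a:m:language:eng? -map 0:a:m:language:und? -map 0:s? -c copy output.mkv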
  13. Wonderful, it's getting somewhere then. What's curious with this missing file is that before, it must have been finding the file so that FFmpeg could fail against it. If you SSH into the container again and follow that path ("/storage/.." etc.) I suspect it's not going to be there, and you'll need to check the Docker mappings. But you must have had it right before?
  14. It was added a few months ago. February, to be precise: a release was made and the issue around the 10bit pipeline was closed: "10 bit pipeline if there is no 8bit-only filter enabled and the selected encoder is 10bit; HDR10 static metadata passthrough; Colorspace filter (using the FFmpeg tonemap filter, which is not the best, and does not implement BT.2390 yet)". It was mentioned at the time that this support was only in the nightly build, but that was back in Feb so it may have made it to the stable builds by now.
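      For anyone curious what that tonemap filter does in plain FFmpeg, the commonly quoted HDR-to-SDR chain looks roughly like this (it needs an FFmpeg build with zscale/zimg; this is the generic recipe, not anything taken from that project's changelog):
          ffmpeg -i hdr_input.mkv -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -c:a copy sdr_output.mkv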
  15. Does your Nvidia card support 10bit? Because Main10 should do it. https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
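      For illustration only (not necessarily what Compressarr passes), a 10bit HEVC encode on an NVENC-capable card in plain FFmpeg would look roughly like:
          ffmpeg -i input.mkv -c:v hevc_nvenc -profile:v main10 -pix_fmt p010le -c:a copy output.mkv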
  16. Apologies for the delay, mad week at work. Your problem is: No VA display found for any default device. This basically means it can't find which device to use for decoding the input stream. Normally it goes through a bunch of hardware locations, such as the current display (not relevant with docker), then some pre-determined locations. What you need to do is tell it which device got passed through using /dev/dri/. You can do this by going to the console and typing cd /dev/dri then ls - that should give you a list of devices. If you see renderD128, then something has gone wrong with your device forwarding in Docker, as this is one of those default locations FFmpeg will look in (but you may want to try it anyway). Otherwise you can specify the following in the "Additional Arguments" box at the bottom of the FFmpeg preset: -hwaccel_device /dev/dri/renderD128 Note: replace renderD128 with whatever device you have. Final note: CPU encoding, whilst much slower, is much better in terms of quality vs file size. Hardware encoding is much better in terms of speed only.
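      In console terms, the check and the manual workaround look something like this (renderD128 is just the usual example name, so use whatever ls actually shows you):
          ls /dev/dri                     # list the render devices the container can see
          # then, in the preset's "Additional Arguments" box:
          -hwaccel_device /dev/dri/renderD128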
  17. So if you open the log for the individual file, (i) next to the line in the Job panel, you will see the output from FFmpeg. I can almost guarantee it's failed to process your video, but it started and there was an output. I'll not bore you with the detail, but basically FFmpeg uses the error output of its process to output its progress. I have no idea why, but I cannot use that for error checking. So that's why you're seeing FFprobe errors; I can rely on those to detect that FFmpeg didn't finish properly. You'll need to see what FFmpeg ended with; there'll be a lengthy white section of code that is the output of FFmpeg, and this usually ends with the error details. From past experience with QSV, and having looked at your profile, I think you'll need to tell FFmpeg how to use QSV; it's not provided in a driver like the Nvidia support is. It's also a lot smaller in terms of file sizes. Check out the following link: https://trac.ffmpeg.org/wiki/Hardware/QuickSync You can see what Compressarr is passing to FFmpeg, and you can specify the additional parameters for FFmpeg in the Additional Parameters box on the Profile page in Compressarr. I do plan to build a pre-built template for this, but it's not at the top of my priority list right now. If you're struggling with the settings, let me know and I'll see if I can help.
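      As a very rough starting point, the sort of thing that wiki describes ends up looking like this as a full command in plain FFmpeg (Compressarr builds most of it for you; the codec names here are just examples, not your profile's settings):
          ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv -c:v hevc_qsv -c:a copy output.mkv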
  18. Thanks for that. It seems that the GitHub-compiled code is one line shorter; I've found that bug and squashed it. GitHub is preparing releases as I type.
  19. In my best programming voice: "What??" Basically, at the line in my source code where that error is happening, that error can't happen. There is no object reference. Can I just confirm you are running the latest version? If I assume you are running the latest version, my next message would be: can you send me the appsettings.json file from your appdata folder? It contains the URL and the API keys for Sonarr/Radarr, so you may want to scrub those if it's publicly accessible.
  20. When you set up a job there is a destination folder, which will list all the folders in the docker image; one of those should be a map (that you've specified) to an external folder, which is where you want the encoded files to go. I'm not sure what would happen if you created a map to a folder that already existed within the image - which one would "win", so to speak.
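      For example (the host path here is purely hypothetical, pick your own share), a mapping like this on the container is what makes /output show up as a destination in the job setup:
          -v /mnt/user/encoded:/output    # host path : container path, as it would appear on a docker run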
  21. v4 is out? OFGS. That may take a few days to sort out.
  22. Hi, first off we need to check to see if the docker containers can see each other. Console to the Compressarr app and type the following to install curl: apt-get install curl Then once that's done, type curl and then the URL to Radarr/Sonarr - so, and I'm guessing here, but it looks like yours might be: curl 192.168.1.50:7878/radarr This isn't going to work, because you'll need to authenticate, but it should return some HTML with an error message in it, rather than nothing or a "Could not resolve host" or a "Failed to connect to..." error. If it returns some successful HTML then I need you to change the logging to Debug, try again and then send me a copy of the logs, and a screenshot of the status of the connection shown when it tries to connect and fails. Obviously, blur out your API key.
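      Put together, the check is just these two commands from the Compressarr console (the IP, port and URL base are only my guess, so substitute your own; you may need an apt-get update first):
          apt-get install curl
          curl 192.168.1.50:7878/radarr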
  23. Today Docker stopped supporting automated builds for free accounts. Apologies for any issues there may have been during the shift to GitHub Actions builds.
  24. And thank you for not giving up and helping me make improvements.
  25. No, the container is called Matroska. It's one of the most difficult things to wrap; containers are ways of containing the data, but can have multiple extensions, and an extension could belong to different containers. So to get round that I have to ask which container you want to use, and then pick the first appropriate file extension. FFmpeg normally has the file extension as the name of the container, but not for Matroska; just to be difficult, it lists it under the full name.