Margucci

Everything posted by Margucci

  1. I'm an idiot; I figured it out. I didn't say yes for "allow user share". However, I did try to delete this topic hours ago and for some reason it didn't delete.
  2. I am encountering a strange issue today. Earlier I needed to shuffle some drives around and rebuilt my cache. I had 2 drives in a ZFS pool for my primary cache and an additional 6 drives in another ZFS pool. Different shares used each pool as their primary storage with the array as secondary. Using the mover I moved everything off both ZFS pools to the array, deleted the ZFS pools, and then created a new ZFS pool named Cache with all 8 drives (Z2 pool).

     I started the array and set the mover to move my appdata and system shares to the newly created ZFS pool. No problems at all; the mover ran without issues and dumped all my files over. When I came back later to set my other shares to do the same thing, I noticed that the appdata and system shares were no longer listed. The files were on the cache drive, and I could start up my Docker containers without issue even with no share listed.

     Then I looked into my other shares and found I was not able to select the cache pool as a location for them at all. It shows it exists, but I am not able to select it as either the primary or secondary storage location; it is just greyed out and I am unable to click on it. I did make the mistake of not changing the shares to array-only prior to deleting the pools. However, any documentation I have found says I should be able to do that after the fact without any issue. In addition, if I attempt to create a new share, the same thing happens: the Cache ZFS pool is greyed out and cannot be selected as a storage location.

     Any assistance is appreciated. I have included a couple of screenshots for context as well as my logs. And yes, I have fully rebooted the server. unraid-syslog-20231214-0144.zip

     EDIT: I did some more playing around, recreated the original pools, and removed the old pools from the storage options, but I am still having the same problem. For some reason it is coming up as a disk share instead of a cache pool.
  3. I was just able to shut down the Docker service and turn it back on, and then everything started up like normal. I encountered my issue after updating 3 containers: 2 updated and started fine, one did not. Also on .12.1.
  4. Thanks for the heads up on that plugin. I missed it when I was looking through things; exactly what I wanted.
  5. Looks like vbr_hq is still causing an issue. It just needs to be replaced in the plugin with vbr (although it looks like FFmpeg sets that as the default anyway, so the entire "-rc vbr_hq" can be deleted; I did some testing with and without and the file size comes out exactly the same). I did come across an update to the NVENC presets from last year: https://docs.nvidia.com/video-technologies/video-codec-sdk/nvenc-preset-migration-guide/ So it looks like it isn't an FFmpeg issue, but an NVENC issue.

     Also, is it possible to make a feature request? An option to only replace the original file if the converted one is smaller; otherwise the converted file is discarded. I find that some low-quality WEBRips in H.264 are sometimes already decently small, and encoding them to H.265 (if you prefer higher quality where available) results in an increase in file size.
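The feature request above (keep the converted file only when it is actually smaller) could be sketched roughly like this. This is a standalone illustration, not part of Tdarr or the plugin; the `keep_smaller` helper name and behavior are my own assumptions:

```python
import os


def keep_smaller(original: str, converted: str) -> bool:
    """Keep the converted encode only if it is smaller than the original.

    Returns True if the original was replaced by the converted file,
    False if the converted file was discarded because it was not smaller.
    """
    if os.path.getsize(converted) < os.path.getsize(original):
        os.replace(converted, original)  # swap in the smaller encode
        return True
    os.remove(converted)  # encode came out larger (or equal): discard it
    return False
```

A real implementation would run after the transcode finishes and before the library entry is updated; this only shows the size comparison itself.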
  6. I removed the "-rc vbr_hq" from the profiles in the plugin and it does work again, although at a slower speed. Should I have removed just the "vbr_hq", leaving the "-rc" in there? I have virtually no experience with FFmpeg. Thanks a ton for the help though; at least it's working now. EDIT: Worked everything out. Thanks for pointing me in the proper direction initially.
  7. I have been keeping up to date on the updates, so it was whichever version was current yesterday. So is it a plugin thing that needs to be updated to account for the changes? Even then, I didn't see anything in the notes about it being updated between 2 days ago and now.
  8. I encountered the same issue as everyone else. I got everything installed again and up and running; however, using NVENC encoding, all transcodes fail. The health checks go through OK, but when it comes time for the actual transcode to happen I get this error:

     [hevc_nvenc @ 0x5629e2d3e280] Specified rc mode is deprecated.
     [hevc_nvenc @ 0x5629e2d3e280] Use -rc constqp/cbr/vbr, -tune and -multipass instead.
     [hevc_nvenc @ 0x5629e2d3e280] InitializeEncoder failed: invalid param (8): Preset P1 to P7 not supported with older 2 Pass RC Modes(CBR_HQ, VBR_HQ) and cbr lowdelay
     Enable NV_ENC_RC_PARAMS::multiPass flag for two pass encoding and set
     Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
     Conversion failed!

     No settings at all have changed since reinstalling; the extra parameters, GPU ID, and capabilities are all in there. From the error I'm seeing, could it be an issue with the plugin I'm using? The one I am using is Tdarr_Plugin_vdka_Tiered_NVENC_CQV_BASED_CONFIGURABLE. Any help would be appreciated.
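Going by the error's own hint ("Use -rc constqp/cbr/vbr, -tune and -multipass instead") and NVIDIA's preset migration guide, the plugin fix amounts to swapping the deprecated rate-control pair for the modern equivalents. A minimal sketch of that substitution on an FFmpeg argument list; the choice of `-multipass fullres` as the vbr_hq replacement is my reading of the migration guide, not something confirmed by the plugin author:

```python
def migrate_rc_args(args: list[str]) -> list[str]:
    """Replace the deprecated '-rc vbr_hq' pair with the modern
    equivalent: '-rc vbr' plus two-pass encoding via '-multipass fullres'.
    All other arguments pass through unchanged."""
    out: list[str] = []
    i = 0
    while i < len(args):
        if args[i] == "-rc" and i + 1 < len(args) and args[i + 1] == "vbr_hq":
            out += ["-rc", "vbr", "-multipass", "fullres"]
            i += 2  # skip the deprecated flag and its value
        else:
            out.append(args[i])
            i += 1
    return out
```

For example, `["-c:v", "hevc_nvenc", "-rc", "vbr_hq", "-cq", "28"]` becomes `["-c:v", "hevc_nvenc", "-rc", "vbr", "-multipass", "fullres", "-cq", "28"]`.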