IamSpartacus

Everything posted by IamSpartacus

  1. Not sure if this is a 6.8-RC1 issue or some other temporary Docker image repository issue, but all my Docker containers show they have an update available and won't update. The previous script fix for this in 6.7.2 doesn't fix it. Diagnostics attached. beast-diagnostics-20191015-1731.zip
  2. Last I heard, the team was having issues with the latest Linux kernels, and if that's still the case, I don't know how quickly this one will get released.
  3. My fear is we don't see 6.8 until they figure out the SQLite issue, and when that happens is anyone's guess.
  4. Is there a permanent fix for this? It seems this has to be done every time on certain VMs (e.g. a macOS VM).
  5. Oh hmmm, it finished a lot quicker than I thought. I was looking at the time going up, and that was the runtime of the movie, not the time left. When it finished, I didn't see any ending stats like you showed, though.
  6. GTX 1660. Tremendous value GPU for HW acceleration: it can do 20 simultaneous 1080p > 720p transcodes or 5 4K > 720p transcodes, with lower power usage, all for $220.
  7. Hmmm, I wonder why my FPS is so low then. It's been holding steady at 169 FPS the entire time. I've heard the new Turing GPUs are a little slower but higher quality, but this seems really slow.
  8. The one line worked. What kind of FPS do you get with that command and what GPU are you using? Trying to compare to my GTX 1660.
  9. With or without the quotes like you have in your initial command? EDIT: Never mind, it worked without the quotes. I guess I needed to remove them in my command above?
  10. Yup. NVENC works fine in your handbrake container and in both Plex and Emby. Hmmmm.
  11. Running this command on my system gives me the following error. Any ideas? docker run --rm --runtime=nvidia -v "/mnt/user/Downloads/handbrake/watch:/input:rw" -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-90490eba-cd03-39f1-d641-3360df982f5a' djaydev/ffmpeg-ccextractor \ > ffmpeg -hwaccel nvdec -i "/input/test.mkv" -c:v hevc_nvenc -c:a copy "/input/test1.mvk" docker: invalid reference format. See 'docker run --help'.
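(A hedged guess at the cause: the "\ >" in the middle of that pasted command looks like a shell line-continuation prompt (PS2) that got copied along with the command. If pasted literally, docker parses the stray characters as part of the image/command and fails with "invalid reference format". A single-line version of the same command, sketched below, should avoid that; note that "test1.mvk" in the original also looks like a typo for "test1.mkv", which I've assumed here.)

```shell
# Sketch: same command as in the post above, joined onto continuation
# lines so no prompt characters sneak in. The GPU UUID is the one from
# the original post; substitute your own from `nvidia-smi -L`.
docker run --rm --runtime=nvidia \
  -v "/mnt/user/Downloads/handbrake/watch:/input:rw" \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e NVIDIA_VISIBLE_DEVICES=GPU-90490eba-cd03-39f1-d641-3360df982f5a \
  djaydev/ffmpeg-ccextractor \
  ffmpeg -hwaccel nvdec -i /input/test.mkv -c:v hevc_nvenc -c:a copy /input/test1.mkv
```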
  12. I've got a pretty beefy server with plenty of CPU/RAM (and room for GPUs) to spare. I'd love to be able to add 1-2 zero/thin clients to bedrooms in my home to be used for basic web browsing, word processing, YouTube, etc. Is anyone doing this using nothing but a zero/thin client connected via Cat6?
  13. The mover is absolutely going to affect the cache, since it's doing reads off the cache. So if you're trying to play a 4K remux off the cache at the same time the mover is reading from it, I'm not surprised you're having issues.
  14. If I'm reading this post correctly, you are running Unraid in some type of production environment? If that's the case, why in the world would you be running a 3rd-party, non-officially-supported OS version? That just can't happen in a production environment, IMO.
  15. I understand your frustration. However, when you use a 3rd-party plugin/application that is not part of the base OS, and thus not maintained by that base OS development team, these things can and will happen from time to time. That's just part of what you have to accept when you use a 3rd-party plugin like this. I realize that doesn't take away your frustration, but your post comes off to the LSIO team (and mainly @CHBMB) as diminishing the fact that he is spending his free time to do this work. If you really don't want to be in this kind of situation again, you should configure your server in a way that will not require the use of 3rd-party supported tools (e.g. use a CPU with an iGPU, or just don't use HW transcoding at all in Unraid).
  16. @fryfrog I see you've been testing v3.10.5 since about 13 days ago? Stable? If so, wanna move it to the latest tag?
  17. Has anyone successfully gotten the inputs.nvidia_smi plugin working in telegraf? Even with the --runtime=nvidia extra parameter and the NVIDIA_DRIVER_CAPABILITIES/NVIDIA_VISIBLE_DEVICES variables, I cannot get it to work. I just get: Yet inside the container I can see nvidia-smi in /usr/bin. EDIT: Turned out to be the alpine repo tag. Changed to latest and now it works.
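(For anyone hitting the same thing, a minimal sketch of the relevant telegraf.conf section is below. This assumes the telegraf container was started with --runtime=nvidia and the NVIDIA_DRIVER_CAPABILITIES/NVIDIA_VISIBLE_DEVICES variables mentioned above, so that nvidia-smi is actually present inside the container; bin_path and timeout are the plugin's documented options.)

```toml
# Sketch of the inputs.nvidia_smi section of telegraf.conf.
# Requires the container to see the GPU (nvidia runtime + env vars),
# otherwise the plugin fails because /usr/bin/nvidia-smi is missing.
[[inputs.nvidia_smi]]
  bin_path = "/usr/bin/nvidia-smi"
  timeout = "5s"
```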
  18. Yes, inside telegraf. But when I run 'which nvidia-smi' inside the container, it returns /usr/bin/nvidia-smi, and I see it there when I cd to /usr/bin/ inside the container as well.
  19. Has anyone successfully gotten telegraf working with inputs.nvidia_smi? Even after adding all the NVIDIA variables to the container, I still get the following error in my telegraf logs:
  20. I've created a bug report so Limetech can hopefully address this issue.
  21. Please see this thread for more information. I've seen the same behavior since 6.6 and now on to 6.7. Also with 2 completely different servers (completely different hardware).
  22. OK, so even trying multiple scatter jobs doesn't seem to work. The behavior you describe above is not happening: no matter what, the data gets moved to disk8 even though it does not have the most free space. So I'm not sure if I'm doing something wrong or if Unbalance will just always start with the first chosen disk regardless of free space available.