
Posts posted by IamSpartacus

  1. 1 minute ago, dee31797 said:

    wow fancy, which turing you got?  I want one because they can do B-frames, better file sizes at better quality levels.

     

    GTX 1660.  Tremendous value GPU for HW acceleration.  Can do 20 1080p > 720p transcodes or 5 4K > 720p transcodes at once.  Lower power usage.  All for $220.

  2. 3 minutes ago, dee31797 said:

    I did it with the quotes, you mainly need the quotes when there's spaces in between, in this case you don't need them.

     

    I ran your command and it started up the container. I don't have a file there so it didn't convert anything, but I didn't get the docker error before it tried.

     

    @unraid:~# docker run --rm --runtime=nvidia -v "/mnt/user/Downloads/handbrake/watch:/input:rw" -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-6315e2bf-1c81-cc19-bfb3-a24978448a5e' djaydev/ffmpeg-ccextractor \
    > ffmpeg -hwaccel nvdec -i "/input/test.mkv" -c:v hevc_nvenc -c:a copy "/input/test1.mvk"

    ffmpeg version 4.0.3 Copyright (c) 2000-2018 the FFmpeg developers
      built with gcc 7 (Ubuntu 7.3.0-27ubuntu1~18.04)
      configuration: --disable-debug --disable-doc --disable-ffplay --enable-vaapi --enable-shared --enable-avresample --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-gnutls --enable-gpl --enable-libass --enable-libfreetype --enable-libvidstab --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libx264 --enable-libkvazaar --enable-libaom --extra-libs=-lpthread --enable-postproc --enable-cuvid --enable-nvenc --enable-version3 --extra-cflags=-I/opt/ffmpeg/include --extra-ldflags=-L/opt/ffmpeg/lib --extra-libs=-ldl --prefix=/opt/ffmpeg
      libavutil      56. 14.100 / 56. 14.100
      libavcodec     58. 18.100 / 58. 18.100
      libavformat    58. 12.100 / 58. 12.100
      libavdevice    58.  3.100 / 58.  3.100
      libavfilter     7. 16.100 /  7. 16.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  1.100 /  5.  1.100
      libswresample   3.  1.100 /  3.  1.100
      libpostproc    55.  1.100 / 55.  1.100
    /input/test.mkv: No such file or directory

     

    The one line worked.

     

    What kind of FPS do you get with that command and what GPU are you using?  Trying to compare to my GTX 1660.

  3. 4 minutes ago, dee31797 said:

    Maybe try it all on one line?

     

    
    docker run --rm --runtime=nvidia -v /mnt/user/Downloads/handbrake/watch:/input:rw -e NVIDIA_DRIVER_CAPABILITIES=all -e NVIDIA_VISIBLE_DEVICES=GPU-6315e2bf-1c81-cc19-bfb3-a24978448a5e djaydev/ffmpeg-ccextractor ffmpeg -hwaccel nvdec -i /input/test.mkv -c:v hevc_nvenc -c:a copy /input/test1.mkv

     

     

    With or without the quotes like you have in your initial command?

     

    EDIT: Nvm, it worked without the quotes. I guess I needed to remove them in my command above?
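
    The quoting rule dee31797 mentions can be shown without docker at all. This toy snippet (hypothetical helper `count_args`, hypothetical path) demonstrates the shell splitting an unquoted value that contains a space:

    ```shell
    # count_args reports how many arguments the shell delivered to it
    count_args() { echo "$#"; }

    # hypothetical path containing a space (not one of the real paths above)
    path="/mnt/user/My Movies"

    count_args $path     # unquoted: word splitting makes this 2 arguments
    count_args "$path"   # quoted: stays 1 argument
    ```

    The `/mnt/user/Downloads/handbrake/watch` path in the commands above has no spaces, which is why the quotes could be dropped there.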

  4. On 7/8/2019 at 10:41 AM, dee31797 said:

    While not as easy as Handbrake, you can decode and encode with your GTX 750 on unraid with ffmpeg.

     

    
    docker run --rm --runtime=nvidia -v "/unraid/your/videos:/input:rw" -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-UUID-from-nvidia-pluginXXXX' djaydev/ffmpeg-ccextractor \
    ffmpeg -hwaccel nvdec -i "/input/oldvideo.ts" -c:v hevc_nvenc -c:a copy "/input/newvideo.mp4"

    Edit, I don't know if GTX 750 has nvdec components or not

     

    Running this command on my system gives me the following error.  Any ideas?

     

    docker run --rm --runtime=nvidia -v "/mnt/user/Downloads/handbrake/watch:/input:rw" -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-90490eba-cd03-39f1-d641-3360df982f5a' djaydev/ffmpeg-ccextractor \
    > ffmpeg -hwaccel nvdec -i "/input/test.mkv" -c:v hevc_nvenc -c:a copy "/input/test1.mvk"
    docker: invalid reference format.
    See 'docker run --help'.

     

  5. I've got a pretty beefy server with plenty of CPU/RAM (and room for GPUs) to spare.  I'd love to be able to add 1-2 zero/thin clients to bedrooms in my home to be used for basic web browsing, word processing, YouTube, etc.  Is anyone doing this using nothing but a zero/thin client connected via Cat6?

  6. 1 minute ago, AgentXXL said:

    I've experienced this as well. I've tried using a SSD as an unassigned device for Plex, and also on my SSD cache drive within unRAID. I do believe that part of the problem is the Plex clients themselves and will be reporting this to the Plex support forums shortly. I've found that  even with Mover tuning priority set to Very Low and disk I/O priority set to Idle, higher bitrate media still has playback issues and will often get stuck 'buffering'.

     

    I believe part of the issue is that the Plex client on most smart TVs have limited RAM to work with. The Plex for LG WebOS client is a prime example of a Plex client that fails often. And surprisingly, even the Plex for Apple TV client often buffers for high-bitrate titles. It makes less sense for the Apple TV 4K as I have a 64GB model and it has plenty of unused storage according to the list of apps under Settings. However the 3rd party Infuse Pro client on my ATV4K rarely experiences a hiccup.

     

    My HTPC using either VLC or Zoomplayer also rarely experiences the buffering that seems to plague the official Plex clients. I've even played around with CPU pinning, with Plex getting 3 out of 4 of my hyper-threaded cores (6 threads total) and all other unRAID processes the 1st core (2 threads). I am planning a hardware upgrade eventually to a system with more CPU cores, but in reality it still seems that the Mover process and disk I/O priority take too many resources to allow for optimal media playback.

     

    At least I have work-arounds by using Infuse or my HTPC.

     

     

    The mover is absolutely going to affect cache since it's going to be doing reads off of cache.  So if you're trying to play a 4K remux off cache at the same time the mover is reading from cache, I'm not surprised you're having issues.

  7. 4 minutes ago, hawihoney said:

    We have tons of self-written scripts that create, manipulate, and extract databases (MariaDB and SQLite). Many of them run automatically from within Unraid User Scripts. Some PHP, some Perl, some bash, ...

     

    There's for example a 30GB SQLite database that simply holds personal names and their relations.

     

    MariaDB is running here as well, but for some jobs it's not fast enough. Running that same, identical 30GB database on MariaDB was a pain - slow as hell.

     

    We thought it would be a good idea to put everything from Windows onto the Unraid server and use the infrastructure of plugins, dockers and VMs. I didn't expect to fall into such a hole. 40 years of software development and I'm still learning.

     

    For me it seems that there's a lot of manual activity involved in maintaining a plugin such as Unraid NVIDIA. I was under the impression that these tasks are mainly automated. As I said, I'm still learning.

     

    Here's one of the dumps that is failing since 6.7.1. Boom, without notice.

     

    
    echo ".dump" | sqlite3 /mnt/cache/system/appdata/SQLite/Similar/similar.db > /mnt/user/Data/sqlite_backup/Similar/dump.sql

     

     

    If I'm reading this post correctly, you are running Unraid in some type of production environment?  If that's the case, why in the world would you be running a 3rd party non-officially supported OS version?  That just can't happen in a production environment IMO.
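
    One way to harden the failing dump above (a sketch only, assuming the paths from the post; `safe_dump` is a made-up helper): refuse to touch the backup unless sqlite3 actually exists and the dump is non-empty, so a missing binary can no longer overwrite a good backup with an empty file.

    ```shell
    # safe_dump DB OUT -- dump DB to OUT without clobbering OUT on failure
    safe_dump() {
        db=$1; out=$2
        # skip entirely if the sqlite3 binary has been removed from the OS
        if ! command -v sqlite3 >/dev/null 2>&1; then
            echo "sqlite3 missing, skipping dump of $db" >&2
            return 1
        fi
        tmp=$(mktemp)
        # replace the old backup only if the dump succeeded and is non-empty
        if echo ".dump" | sqlite3 "$db" > "$tmp" && [ -s "$tmp" ]; then
            mv "$tmp" "$out"
        else
            rm -f "$tmp"
            return 1
        fi
    }
    ```

    Used as `safe_dump /mnt/cache/system/appdata/SQLite/Similar/similar.db /mnt/user/Data/sqlite_backup/Similar/dump.sql` in place of the raw pipeline.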

  8. 28 minutes ago, hawihoney said:

    As a user of Unraid NVIDIA _and_ tools using SQLite, the delayed 6.7.2 Unraid NVIDIA release is a real problem here. We can't change back to stock Unraid. On the other side, some important SQLite-based tools don't work any longer. In fact it has bitten us: after applying 6.7.1, SQLite tools and SQLite dumps overwrote backups with empty files. We simply did not expect that somebody would remove a tool like SQLite from Unraid.

     

    Now Unraid 6.7.2 is out and SQLite is back - but not for us. We have to wait for the Unraid NVIDIA 6.7.2 release. Going back to 6.7.0, without these additional security patches, is not an option either.

     

    So now we have a lot of time to change our own SQLite tools to check for SQLite in Unraid before dumping data or whatever. New data is not coming into the house - so everything's cool, no?

     

    Just some other 0.02 USD.

     

     

    I understand your frustration.  However, when you use a 3rd-party plugin/application that is not part of the base OS, and thus not maintained by the base OS development team, these things can and will happen from time to time.  That's just part of the knowledge you must have when using a 3rd-party plugin like this.  I realize that doesn't take away your frustration, but your post comes off to the LSIO team (and mainly @CHBMB) as diminishing the fact that he is spending his free time to do this work.  If you really don't want to be in this kind of situation again, you should configure your server in a way that does not require 3rd-party supported tools (i.e. use a CPU with an iGPU, or just don't use HW transcoding in Unraid at all).

    • Upvote 1
  9. Has anyone successfully gotten the inputs.nvidia_smi plugin working in telegraf?  Even with the --runtime=nvidia extra parameter and the NVIDIA_DRIVER_CAPABILITIES/NVIDIA_VISIBLE_DEVICES variables I cannot get it to work.  I just get:

     

    Quote

     

    Error in plugin: fork/exec /usr/bin/nvidia-smi: no such file or directory

     

    Yet inside the container I can see nvidia-smi in /usr/bin.

     

     

    EDIT:  Turned out to be the alpine repo.  Changed to latest and now it works.

    • Thanks 1
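
    For reference, the plugin section in telegraf.conf looks like this (a sketch; `bin_path` and `timeout` are the plugin's documented options, and the path assumes the NVIDIA runtime exposes nvidia-smi inside the container at /usr/bin):

    ```toml
    [[inputs.nvidia_smi]]
      ## path to the nvidia-smi binary inside the telegraf container
      bin_path = "/usr/bin/nvidia-smi"
      ## how long to wait for nvidia-smi to respond
      timeout = "5s"
    ```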
  10. Has anyone successfully gotten telegraf working with inputs.nvidia_smi?  Even adding all the nvidia variables to the container, I still get the following error in my telegraf logs:

     

    Quote

    [inputs.nvidia_smi]: Error in plugin: fork/exec /usr/bin/nvidia-smi: no such file or directory

     

  11. 1 hour ago, jbrodriguez said:

    It doesn't 'balance' data across drives.

     

    Scatter will use all the free space on the drive with the most available space, if there's still content left to move, uses all the free space from the drive with the second most available space and so on.

     

    OK, so even trying multiple scatter jobs doesn't seem to work.  The behavior you describe above is not happening.  No matter what, the data is getting moved to disk8 even though it does not have the most free space.  So I'm not sure if I'm doing something wrong or if Unbalance will just always start with the first chosen disk regardless of available free space.

  12. 44 minutes ago, jbrodriguez said:

    It doesn't 'balance' data across drives.

     

    Scatter will use all the free space on the drive with the most available space, if there's still content left to move, uses all the free space from the drive with the second most available space and so on.

     

    I see.  So if I wanted to move 7.5TB off one drive and scatter it across 8 new drives, I'd have to select subfolders and do a separate move job for each subfolder?

  13. I'm trying to do a scatter by moving a directory on disk1 to multiple disks (disks8-15, all empty).  However, when I start the move it only moves the files to the first disk (disk8) in the group of disks I'm trying to move the data to.  What am I missing?

  14. 5 minutes ago, chad4800 said:

    I typically don't have any issues streaming while Plex is scanning/updating my library either. I haven't noticed any high CPU, RAM, or Disk I/O utilization. I'll have to look more closely at the CPU I/O wait, and see if that's happening with me as well. 

     

    It's not Plex that causes high CPU, RAM, or Disk I/O.  It's the mover process.  I have mitigated this for the time being by scheduling my mover to run only once a day at 5am.  But now that I've moved into doing a lot of 4K, sometimes I need to run the mover manually during the day and I have to hope no one is using Plex.

    • Like 1
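
    Scheduling the mover once a day at 5am corresponds to a cron entry like the one below (Unraid's Scheduler page sets this for you; `/usr/local/sbin/mover` is the usual Unraid mover location, but treat the path as an assumption):

    ```
    # run the mover daily at 05:00, outside Plex prime time
    0 5 * * * /usr/local/sbin/mover
    ```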
  15. 2 hours ago, ProZac said:

    Just a quick sidenote, not sure if you have looked into this: what is your Plex set to do when files in the database are changed? I assume you have set Plex to scan its library? Might it be that Plex is starting to scan the files, as this can take a lot of resources, and it might be Plex itself that stalls and not the server?

     

    Good thought, but in my testing this was not the case.  Yes, Plex scans on library changes, but with all the resources I have at my disposal the scan is done in less than 30 seconds.  The issue seems to be CPU IOWait.  Watching netdata during the mover process, IOWait jumps up significantly and I don't understand why.

    • Like 1