
CHBMB

Community Developer
  • Content Count: 10456
  • Days Won: 38

Posts posted by CHBMB


  1. Just read the work that went into releasing the latest version. Thank you to all that are involved in keeping this amazing addition to unraid working. I've been enjoying the benefits of it ever since it was released and as we all know how great it is to have this ability. I really appreciate it.

    I searched back a bit and didn't see anything recently about the work being done on combining the Nvidia build and the DVB build. That would be the next dream come true. Is there any place I can follow the development of that build? Seems like it's getting lost in this thread.
    Yeah, I'm just too tied up with work at the moment; I'll try and pick that up in the New Year.

    Sent from my Mi A1 using Tapatalk


  2. Just now, Coolsaber57 said:

    Thank you, I had that in there before in earlier testing and it still wasn't working, and I must have forgotten to add it back in this time.  Adding it back in does allow it to see the encoders/decoders list (hooray!).

     

    Now I have to figure out why it's failing to work in binhex-emby.  It does appear to be an older version, so I've contacted binhex to see if he can update the container to the latest to see if that resolves the issue.

     

    Thank you for your help!

    It's not just down to the Emby version; there are extra dependencies that we add into our image.


  3. 45 minutes ago, Coolsaber57 said:

    Sure! 

    [screenshot]

     

    Docker run command:

    
    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-emby' --net='proxynet' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -e 'NVIDIA_DRIVER_CAPABILITIES'='all' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-8440d19a-9e76-d65d-446d-ae1025166862' -p '8096:8096/tcp' -v '/mnt/user/Multimedia/':'/media':'rw' -v '/mnt/disks/apps/appdata/binhex-emby':'/config':'rw,slave' --runtime=nvidia 'binhex/arch-emby'
    
    d4bc81427e6cc561e1c829a5f6c54d37663f0133a444cc11d2b89cb0f861a6cd

     

     

    Can you try the LinuxServer version of Emby please?  I am 100% sure that works as it's the combination I use.
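As an aside, the GPU UUID passed via NVIDIA_VISIBLE_DEVICES in the run command above can be listed with nvidia-smi. A minimal sketch, assuming the Nvidia driver is installed; the fallback branch just makes the snippet safe to run on machines without it:

```shell
# List GPU UUIDs suitable for NVIDIA_VISIBLE_DEVICES.
# Assumes the Nvidia driver (and therefore nvidia-smi) is installed;
# otherwise prints a fallback message instead of failing.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=uuid --format=csv,noheader
else
  echo "nvidia-smi not available on this host"
fi
```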


  4. 3 hours ago, Coolsaber57 said:

     

    Nope, I do have that card in a VM template to be passed through, but it is off and the server has since been rebooted.  I even tried to remove it from the template in case that was it, but same result.

     

    Not sure how to check if I'm booted into GUI mode, but I don't think so.

    So my Emby transcoding settings look like this. Do you have Emby Premiere?

     

    [screenshot of Emby transcoding settings]


  5. 53 minutes ago, CSIG1001 said:

    Tell me what Unraid can't do that a Synology NAS can?

    Speaking as someone who tries to support Docker on both Unraid and Synology, my overwhelming impression is that I wouldn't personally touch a Synology with a ten-foot barge pole. It's a closed system with a weird Docker implementation, which we see lots of issues with that we can't seem to get to the bottom of.


  6. Just now, SJOWG said:

    I'm sure you're telling me something important, but I'm not getting it. Are you saying I'm in the wrong thread? Or that I've configured my system incorrectly? Something else?

    This is the support thread for the LinuxServer version of Plex, the run command you posted is for the LimeTech version of Plex.

     

    Depending on how you look at it, you're either in the wrong thread (use the LimeTech support thread) or you've configured your machine incorrectly (you should be using our version).


  7. 4 hours ago, leejbarker said:

    Got my 1650 Super and prior to this release (of the Unraid / NVIDIA Drivers) I couldn't get it to pass through to my VM.

     

    Since this driver release, the card should be supported by the NVIDIA driver. However, under the Info Panel, GPU Model and Bus it just shows as a Graphics Device. I know on my previous card, it had the model number etc.

     

    Is something wrong here? I'm going to pull the card out tomorrow and try it in my baremetal windows PC, as it might be a bad card.

     

    3 hours ago, leejbarker said:

    Further to this, it does seem to be hardware transcoding fine!

     

    2 hours ago, saarg said:

    You don't need this build to pass through the card to a VM, and it doesn't have anything to do with what it is recognized as in Linux. Is it the system devices list you are talking about?

    I think it's working fine now, @saarg, although the more I read it the more confused I get.....

     

     


  8. 56 minutes ago, SJOWG said:

    This is my command:

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='PlexMediaServer' --net='host' --log-opt max-size='50m' --log-opt max-file='1' --privileged=true -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -v '/mnt/user/appdata/PlexMediaServer':'/config':'rw' 'limetech/plex'

    e98fb6545baa9978e101367ab338143fdaf2d9eaca29d655b83f0eb3fe5286c5

    More to the point, you're not even using the LinuxServer container: you're running 'limetech/plex'.


This was an interesting one: builds completed and looked fine but wouldn't boot, which was where the fun began.

     

    Initially I thought it was just because we were still using GCC v8 and LT had moved to GCC v9; alas, that wasn't the case.

     

    After examining all the bits and watching the builds I tried to boot with all the Nvidia files but using a stock bzroot, which worked.

     

    So I then tried to unpack and repack a stock bzroot, which also reproduced the error. Interestingly, the repackaged stock bzroot was about 15MB bigger.

     

    I asked LT if anything had changed, as we were still using the same commands as when I started this back in ~June 2018. Tom said nothing had changed on their end recently, and told us they were packing bzroot with xz --check=crc32 --x86 --lzma2=preset=9.
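For illustration, that xz invocation can be exercised on a dummy file to confirm the flags round-trip. A minimal sketch, assuming xz is installed; the file names are made up and are not part of the real build:

```shell
# Compress a dummy payload with the exact flags LT quoted
# (crc32 integrity check, x86 BCJ filter, LZMA2 preset 9), then decompress.
printf 'root filesystem bytes' > /tmp/payload
xz --check=crc32 --x86 --lzma2=preset=9 -c /tmp/payload > /tmp/payload.xz
xz -dc /tmp/payload.xz   # round-trips back to the original bytes
```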

     

    So I changed the packaging to use that compression; it still wouldn't work.

     

    At one point I had a repack that worked, but when I tried a build again I couldn't reproduce it, which induced a lot of head scratching. I assumed my version control of the changes I was making must have been messed up, but damned if I could reproduce a working build; both @bass_rock and I were trying to get something working, with no luck.

     

    I ended up going down a rabbit hole of analysing bzroot with binwalk, and became fairly confident that the microcode prepended to the bzroot file was good, and that it must be the actual packaging of the root filesystem that was at fault.

     

    We focused on the two relevant lines. The problem was that LT had given us the parameters to pack with, but xz receives its input from cpio, so the pack step couldn't be fully presumed good, and we still couldn't ascertain that the actual unpack was valid, although it appeared to give us a complete root filesystem. Yesterday @bass_rock and I were both running "repack" tests on a stock bzroot to try and get that working, confident that if we could do that, the issue would be solved. Him on one side of the pond and me on the other, changing one parameter at a time and discussing it over Discord. Once again I managed to generate a working bzroot file, but when I tested the same script again it failed. I've got to admit that confused the hell out of me.....

     

    I had to go to the shops to pick up some stuff, which gave me a good hour in the car to think, and I had a thought: I'd done a lot of the initial repacking on my laptop rather than over an SSH connection to an Unraid VM, and I wondered if that might be why I couldn't reproduce the working repack. The reason being, tab completion on my Ubuntu-based laptop means I have to prepend any script with ./, whereas on Unraid I can just type the first two letters of the script name and tab completion will work, and obviously I'll always take the easiest option. I asked myself whether the build that had worked earlier was failing because it depended on being run with ./, and whether I'd run it like that on the occasions it worked.

     

    I chatted to bass_rock about it, and he kicked off a repackaging of a stock bzroot with --no-absolute-filenames removed from the cpio step, and it worked. We can only assume something must have changed on LT's side at some point.

     

    To put it into context, we've been using this cpio snippet since at least 2014/15, or whenever I started with the DVB builds.

     

    The scripts to create an Nvidia build are over 800 lines long (not including the scripts we pull in from Slackbuilds), and we had to change 2 of them........

    There are 89 core dependencies, which occasionally change: an extra one gets added, or a version update of one of them breaks things.


    I got a working Nvidia build last night and was testing it for 24 hours, then woke up to find that Slackbuilds had updated the driver since, FML. I've run a build again, and it boots in my VM. I need to test transcoding on bare metal, but I can't do that as my daughter is watching a movie, so it'll have to wait until either she goes for a nap or the movie finishes.

     

    Just thought I'd give some background for context. Please remember that all the plugin and Docker container authors on here do this in our free time; people like us, Squid, dlandon, bonienl et al. put a huge amount of work in, and we do the best we can.

     

    Quote

    Where it at yo? 😈

     

    Quote

    Yes please. I am getting tired of the constant reminders to upgrade to RC7. Can't, because my Plex server will lose HW transcoding.

     

    Comments like this are not helpful, nor appreciated; please read the above to get some insight into why you had to endure the "exhaustion" of constant reminders to upgrade to RC7.

     

    On 11/27/2019 at 8:10 PM, leejbarker said:

    Hi,

     

    Completely understand you guys do this in your spare time and I really am thankful for your work so far.

     

    I've got a 1650 super recently and I'm just wondering when we might see a driver update...

     

    Thanks

    On 11/29/2019 at 9:30 AM, the1poet said:

    Hi aptalca. Appreciate the work the team does.

     

    Comments like this are welcome and make me happy..... :D

     

     

    EDIT: Tested and working, uploading soon.

    [screenshot]

     

     


I'm sorry, I'm a complete noob, and I'm completely lost. Please excuse my ignorance, egregious though it may be.
    I'm trying to enable the Plex Media Server. I created a new Docker container, and walked through the (dated) video as best I can.
    I used the unRaid cheat sheet at https://ronnieroller.com/unraid#setup-notes_plex-media-server to set the container path.
    Plex runs, but it doesn't see my Media share. When I go to Add Library, and then Add Folder, I can get to /mnt, but it doesn't show any subfolders--I can't see /mnt/user, let alone any of the shares underneath that. 
    I read the unRaid Docker Guide at https://lime-technology.com/wp/docker-guide/, and frankly can't understand a damn thing it's saying.
    If there's anything resembling a manual, or instructions, I'd greatly appreciate someone pointing me in the right direction. I'd gladly RTFM if I could find the FM.
    Post your docker run command; a link to how to do this is in my signature. You may need to enable signatures in your forum settings.

    Sent from my Mi A1 using Tapatalk


  11. On 11/26/2019 at 1:09 PM, wreave said:

    The Realtek RTL8125 (driver added in 6.8.0-rc2) does not work on Nvidia Unraid 6.8.0-rc5.

     

    I have a full break down of tests and diagnostics over on this thread. Any insight here would be appreciated as I really would like to be able to continue using the Nvidia builds. 

     

    The summary is that the RTL8125 works on both stock Unraid 6.8.0-rc5 and rc7 but it fails to work on Nvidia Unraid 6.8.0-rc5.

    I think this is resolved; we just need to make the actual Nvidia build work now. No ETA.
