
Posts posted by fr0stbyt3

  1. I have an ASUS Pro WS WRX80E-Sage with onboard dual Intel 10Gb Ethernet. With a fresh install of Unraid, I get a crash - even in safe mode - right after the line "ixgbe-mdio: probed".

     

    I'm guessing this is a driver fault of some kind, but I don't know where to look next since I can't even boot long enough to access anything. Any thoughts?
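
    One thing I may try (an untested sketch, not a confirmed fix): blacklist the onboard ixgbe driver from the flash drive so the box can at least finish booting, then dig into the driver problem from a running system.

    # Untested sketch: with the Unraid USB stick mounted on another machine (or
    # /boot on the server, if a console is reachable), add modprobe.blacklist=ixgbe
    # to the kernel "append" line(s) in syslinux.cfg so the onboard 10Gb NIC
    # driver is never loaded. Remove the parameter to undo.
    sed -i 's/append /append modprobe.blacklist=ixgbe /' /boot/syslinux/syslinux.cfg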

  2. On 5/9/2019 at 7:49 PM, interwebtech said:

    StarTech.com 12U Adjustable Depth Open Frame 4 Post Server Rack with Casters/Levelers and Cable Management Hooks 4POSTRACK12U Black
    2x StarTech.com 1U Adjustable Mounting Depth Vented Rack Mount Shelf
    CyberPower OR1500LCDRM1U 1U Rackmount UPS System
    NORCO 4U Rack Mount 24 x Hot-Swappable SATA/SAS 6G Drive Bays Server Rack mount RPC-4224
    EVGA Supernova 850 G3, 80 Plus Gold 850W Modular Power Supply 220-G3-0850-X1
    ASRock EP2C612 WS Motherboard
    2x Intel Xeon E5-2620 v3 Six-Core Haswell Processor 2.4GHz LGA 2011-3 CPU
    2x Intel LGA 2011-3 Cooling Fan/Heatsink
    8x Crucial 8GB Single DDR4 2133 MT/s (PC4-2133) CL15 SR x4 ECC Registered DIMM CT8G4RFS4213 (64GB)
    2x Samsung 970 EVO 1TB - NVMe PCIe M.2 2280 SSD (MZ-V7E1T0BW)
    2x QNINE M.2 NVME SSD to PCIe adapter
    LSI Logic LSI00244 SAS 9201-16i 16Port 6Gb/s SAS/SATA Controller Card
    1x NORCO Computer Parallel (reverse breakout) Cable (C-SFF8087-4S)
    4x 10Gtek Internal Mini SAS SFF-8087 Cable, 0.5 Meter
    2x Gigabit network adapters bonding to single interface
    Unraid OS Pro 6.x
    1TB RAID1 Cache Pool
    2x 8TB parity
    15x HDD array @100TB

    [Attached image: IMG_20190509_164329.jpg]

    I dig your case choice :). You should show us what it looks like inside.

  3. 9 hours ago, Xaero said:

    Also know that the encode process is heavily impacted by read and write performance. I'm not sure how nvdec handles its buffer queueing, but if the buffer isn't filled with enough data, you will notice the video stop playing. This is READ-limited performance, and it would be heavily impacted by a parity check, especially for high-bitrate media.

     

    The nvenc side of the house is limited by how much data is being fed into it by the decoder, and the write speed of the destination media. If you are transcoding to tmpfs (ram) this will almost never be your bottleneck as the encoded media is typically much smaller and lower bitrate than the source media.

    Plex has its own SSD. I use the same SSD for transcoding. The data is read from the array like normal. Could it be as simple as I was saturating the SSD?
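
    For what it's worth, one way I could take the SSD out of the equation (an untested sketch; the paths here are assumptions, not from this thread) would be to point the transcoder scratch space at RAM instead:

    # Untested sketch: give Plex a RAM-backed transcode directory so transcoding
    # never touches the SSD. On Unraid, /tmp lives in RAM (tmpfs).
    mkdir -p /tmp/plex-transcode
    # In the Plex docker template, add a path mapping:
    #   container path: /transcode    host path: /tmp/plex-transcode
    # then set Plex > Settings > Transcoder > "Transcoder temporary directory"
    # to /transcode. If the stutter during parity checks goes away, the SSD was
    # the bottleneck.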

  4. 1 hour ago, Xaero said:

    I'd also suggest using the copy on the gist I linked:
    https://gist.github.com/Xaero252/9f81593e4a5e6825c045686d685e2428

    It checks for a couple of things that can happen. For example, with the old version, if you started a VM with the card passed through, all transcodes would stop working until the VM was stopped; now it simply falls back to CPU decoding. It also ignores files that use the mpeg4 AVI container, as they have problems with the ffmpeg build used by Plex so far.

    And yeah, a force update, or changing any property of the Docker container, pulls a fresh copy, making this very easy to roll back from.

    Thank you. Again, I appreciate you guys. 

     

    Any idea why parity check would cause issues with this?
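
    For context, the fallback behaviour described above boils down to something like this (an illustration only; use the actual gist, which also handles the mpeg4 AVI case):

    #!/bin/sh
    # Illustration only of the fallback idea; see Xaero's gist for the real thing.
    # If the GPU is unreachable (e.g. passed through to a running VM), skip nvdec
    # and fall back to the stock CPU decode path.
    if nvidia-smi >/dev/null 2>&1; then
        exec "/usr/lib/plexmediaserver/Plex Transcoder2" -hwaccel nvdec "$@"
    else
        exec "/usr/lib/plexmediaserver/Plex Transcoder2" "$@"
    fi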

  5. On 2/27/2019 at 9:55 PM, Xaero said:
    
    #!/bin/bash
    
    #This should always return the name of the docker container running plex - assuming a single plex docker on the system.
    con="`docker ps --format "{{.Names}}" | grep -i plex`"
    
    echo ""
    echo "<b>Applying hardware decode patch...</b>"
    echo "<hr>"
    
    #Check to see if Plex Transcoder2 Exists first.
    exists=`docker exec -i $con stat "/usr/lib/plexmediaserver/Plex Transcoder2" >/dev/null 2>&1; echo $?`
    
    if [ $exists -eq 1 ]; then # If it doesn't, we run the clause below
    	docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
    	docker exec -i $con /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
    	docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
    	docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
    	docker restart $con
    	echo ""
    	echo '<font color="green"><b>Done!</b></font>' #Green means go!
    else
    	echo ""
    	echo '<font color="red"><b>Patch already applied or invalid container!</b></font>' #Red means stop!
    fi

     

    Is there a quick script to reverse this? I'm not really a Linux guy, and I want to remove it for testing.

     

    Edit: OK, I'm dumb. Editing any property on the Docker container seems to reset the patch.
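
    For reference, a standalone reversal along the same lines as the patch script above should also work (an untested sketch that simply restores the renamed binary):

    #!/bin/bash
    # Untested sketch: undo the hardware decode patch by moving the original
    # binary (renamed to "Plex Transcoder2" by the patch) back over the wrapper.
    con="`docker ps --format "{{.Names}}" | grep -i plex`"
    exists=`docker exec -i $con stat "/usr/lib/plexmediaserver/Plex Transcoder2" >/dev/null 2>&1; echo $?`
    if [ $exists -eq 0 ]; then
        docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder2" "/usr/lib/plexmediaserver/Plex Transcoder"
        docker restart $con
        echo "Patch removed."
    else
        echo "Patch not present or invalid container; nothing to do."
    fi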

  6. This might be completely unrelated, but I thought I would share:

     

    I have a P2000 and I added the patch for both decode and encode. When the parity check started running, I was getting random failures in Plex. Sometimes it would buffer endlessly; sometimes playback would start and then stop. Remote users were unable to view libraries. However, the moment I stopped the parity check, everything went back to normal.

     

    I wouldn't normally report this here, but this setup has been running fine with the same users and devices, with a parity check every month, for two years without issues. The only new things are the patch and the latest build of Unraid.

     

    I hope this is helpful. Thank you so much for all your hard work.

  7. 1 hour ago, BrandonG777 said:

     

    I think it should be possible to implement a variable/parameter for DNS validation (similar to the HTTPVAL one), with the user supplying the script for their provider via the /config mount. Understandably, you can't write a silver-bullet script to handle hundreds if not thousands of DNS providers, but maybe throw out an example for a couple of the big ones. Of course, all of this is probably for naught, because when I tried DNS validation it told me it couldn't be automated and that I should find another way to validate. Again, I'm no expert, just stumbling through this issue like all the other Cox/Comcast users.

     

    I'm a Cox customer as well. I have nothing important to add; just watching this thread closely and hoping someone figures it out.
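
    In case it helps anyone landing here later: DNS-plugin support did make it into the linuxserver container. A rough sketch of what the template ends up looking like (variable names are as I recall them from the container docs; double-check the current README, and Cloudflare is just an example provider):

    # Untested sketch: DNS-based validation, so no inbound port 80 is needed for
    # the cert (the whole problem for Cox/Comcast users). Assumes a container
    # build with DNS plugin support; check the linuxserver docs for your provider.
    docker run -d --name=letsencrypt \
        -e URL=example.com \
        -e SUBDOMAINS=wildcard \
        -e VALIDATION=dns \
        -e DNSPLUGIN=cloudflare \
        -v /mnt/cache/appdata/letsencrypt:/config \
        linuxserver/letsencrypt
    # Provider API credentials then go in /config/dns-conf/cloudflare.ini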

  8. 5 hours ago, binhex said:


    I've never seen it before, but that's probably because I write to the SSD cache. I would say ignore it for now and see if it occurs once you're using the cache drive again.

     

    Personally, I would concentrate on getting your data migrated, get that 100% done, get the cache drive re-introduced, and then start from the beginning with Docker. You don't want to rush these things; small and steady steps are the way to do this.


    Sent from my SM-G900F using Tapatalk
     

     

     

    I ended up finding a thread on migrating your Plex database to a new system. It was much less of a pain to deal with.

     

    After watching it for a while, I saw it start throwing those errors after 10+ hours of processing metadata. At that point there were all kinds of issues: I couldn't fix matches anymore, posters were missing, it was a mess.

     

    So then (not that I don't trust you) I downloaded the official Docker image. Exact same thing. It seems like bulk data import is the issue here.

  9. 3 hours ago, binhex said:


    Not that I know of. Sounds like your drive is too busy to process the SQLite commits. Are you writing to the cache drive or the array for /config?

    Sent from my LG-V500 using Tapatalk
     

     

     

    Array drive. I disabled the cache for now - I'm migrating to this server, so I don't want the cache in the middle of a bulk copy. However, no copy was running when I was getting this. Is it normal for Plex to show these errors when it's building up the database for the first time?
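
    For anyone else who hits this: the fix binhex is pointing at is to keep /config on the SSD cache once the migration is done. Roughly (an untested sketch; the container name is an assumption and mover behaviour varies by Unraid version):

    # Untested sketch: move Plex's appdata onto the SSD cache so SQLite isn't
    # competing with array/parity I/O for /config.
    docker stop binhex-plexpass      # container name is an assumption
    # In the Unraid UI, set the appdata share to "Use cache disk: Prefer" (or
    # "Only"), then run the mover so the share ends up on the cache drive.
    mover
    docker start binhex-plexpass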
