Posts posted by KYThrill

  1. On 1/17/2023 at 9:32 PM, emb531 said:

    Since the update on 1/14 Recycle Bin is not working correctly, files are being deleted immediately and not being sent to the Recycle Bin.  I can see the parent folder in the Recycle Bin (Movies) but the actual folder/file that was deleted is not present.  Logging is working correctly and shows the unlinkat commands.  I have restarted the plugin but same issue.

    I am having a similar problem.  Earlier today I was using an old version of the plugin from December, cleaning up old files.  Everything was working fine.

    Fix Common Problems kept bugging me with a pop up, and it was because I had several plugins out of date.  Recycle Bin was one of them.  

    After the update to the 2/10 version, Recycle Bin doesn't seem to work at all.  No entries are being made in the log file.  If I delete a file (anything from 200 KB to 5 GB I've tried) it is immediately deleted.  I can watch the available space on the drive increase instantly.  Before it didn't increase unless I emptied the bin.

    I tried uninstalling and reinstalling, but that did nothing.  However, I wasn't able to reboot between the uninstall and reinstall, because I am 8 hours into a 24-hour preclear.

    I will be able to do more tomorrow after that is finished.

    But updating from a December build to 2/10 build seems to have completely broken Recycle Bin for me.

    Also, I tried deleting the .Recycle.Bin folder in one of my shares and then deleting a file out of it.  It did not recreate the folder.
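    For anyone digging into this: as far as I know, the Recycle Bin plugin works by enabling Samba's vfs_recycle module on each share, which is what intercepts deletes over SMB and moves them into .Recycle.Bin instead of unlinking them.  A minimal smb.conf fragment showing that mechanism looks roughly like this (the share name is just an example, and I'm assuming this is how the plugin wires it up).  If the update stopped injecting these lines, deletes would go straight through, which would match the behavior above:

```ini
[Movies]
   ; vfs_recycle intercepts SMB deletes and moves files to the repository
   vfs objects = recycle
   recycle:repository = .Recycle.Bin
   recycle:keeptree = Yes
   recycle:touch = Yes
```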

  2. Well, I think I figured out the issue, and it has nothing to do with the docker.  I tested the network connection at up to 75 Mb/s from the server to the player, so there is no reason to think it is a network bottleneck.  The maximum possible bit rate for any type of Blu-ray is 54 Mb/s.

     

    But what is happening is that Plex is direct streaming the video (no transcode) while transcoding the audio on the fly.  It doesn't appear that my Semprons have enough juice to handle the audio transcoding.  Most of these Blu-ray rips have DTS-HD or TrueHD 5.1/7.1 audio.  Both players are connected to 2.0 audio devices, so Plex is transcoding before streaming and then probably trying to do some sort of sync between the direct video and the transcoded audio.

     

    However, the Dune and Kodi accept the full audio stream, and their hardware downmixes to 2.0 to pass to the TV.  So the server doesn't need to handle this task.

     

     I guess unless I upgrade the CPUs or add a GPU to the system to help with transcoding, Plex is out as an option.  I guess I can investigate Emby...  Apparently you can disable transcoding on Emby.
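    For reference, the bitrate arithmetic is easy to sanity-check yourself: a rip's average bitrate is size_in_bytes × 8 / duration_in_seconds.  A quick sketch with hypothetical numbers (a 30 GB rip with a 2-hour runtime):

```shell
#!/bin/sh
# Average bitrate of a rip in Mb/s: bytes * 8 bits / seconds / 1e6.
# The file size and runtime below are hypothetical examples.
avg_mbps() {
    bytes=$1; seconds=$2
    echo $(( bytes * 8 / seconds / 1000000 ))
}
avg_mbps 30000000000 7200   # a 30 GB, 2-hour rip
```

    That lands well under both the 54 Mb/s Blu-ray ceiling and the 75 Mb/s measured network throughput, which is why I ruled the network out.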

  3. This is actually a question about the Plexinc docker for Plex Media Server, but I can't find a support topic for that version, just this one.

     

    I wanted to know if there are any settings that would reduce buffering and micro-stuttering.  I think I have it set to play the original stream, and the playback info says it is original (1080p Blu-ray rips to MKV).  So there is no transcoding or anything like that happening.

     

    For background, I've been running multiple unRAID servers for 10 years now.  They feed Dune media players, and I have really had zero complaints using those for a decade.  But Dune is pretty much obsolete and DFI is unsupported.  I've had to make several patches to the DFI code just to keep the APIs scraping.

     

    So I want to move to something more modern.  As a test case, I have set up my PC and one Nvidia Shield 4K with both Plex and Kodi.

     

    I installed Plex Media Server on my unRAID via docker.  The two Kodi installs are just using separate local indexes for the time being.  The two boxes are getting Plex data from the media server. Both use Gigabit Ethernet, not Wi-Fi.  All media files are on the same server as the Plex docker.

     

    My problem is that Plex buffers every minute or two, and has these 2-3 second microskips (playback just skips ahead) when it is playing. It probably buffers for nearly a full minute at the start of a movie, before even attempting to start playback.  Content is pretty much unwatchable.  However, the same content played back with Kodi is perfect.  The Kodi playback starts pretty much instantly (just the HDD spin up time, if required).

     

    This is happening on both the PC and the Shield TV.  I know too many people use Plex for this to be a problem for everyone.  Is Plex, even if not transcoding, heavily dependent on the unRAID CPU?  For low power consumption, I use some lower-end CPUs in unRAID (Sempron 145s).  But this has never been a problem for the Dunes and doesn't seem to matter for Kodi either.

     

    I will also add that memory consumption goes to 90% when playing back through Plex, with the media server running.  It hovers at 81% with Plex running but no stream playing.  With the server off, memory is about 50% used when playing a Kodi stream.

     

    However, I also get flawless Kodi playback with the Plex Media Server running (about 86% RAM usage).

     

    My playback issues seem to be a Plex problem, but I have no idea what it could be.  Can anyone help?

     

    Other than these playback issues, Plex is great for management, but I will have to toss it if playback is going to buffer.
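    One thing I can do to narrow it down is a raw sequential-read test straight off the array, bypassing Plex entirely.  A quick sketch with dd; on the server you'd point it at an actual movie file (the path below is hypothetical), and /dev/zero here just demonstrates the output:

```shell
#!/bin/sh
# Read the first 64 MiB of a file and report dd's throughput summary line.
# On a real array, point this at a rip, e.g. /mnt/user/Movies/film.mkv
# (hypothetical path); /dev/zero below just shows the output format.
measure_read() {
    dd if="$1" of=/dev/null bs=1M count=64 2>&1 | tail -n 1
}
measure_read /dev/zero
```

    If that reports far more than ~7 MB/s (the ~54 Mb/s Blu-ray ceiling), raw disk reads are not the bottleneck and the problem sits with Plex itself.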

     

     

  4. Yes, I put an old PCI Intel PRO/1000 GT card in it that I had, and it has been running two days with 0 Ethernet errors.

     

    I almost think it has to be the driver.  My second unRAID has been up for 6 months, and it has 5 HD security cameras that stream to it 24/7.  It has a Realtek 8168, and unRAID loads the Realtek driver for it.  It has had zero errors with the Realtek.

     

    Both systems have the same low power CPU (Sempron 145) and same amount/brand/model of RAM. 

     

    I still get audio dropouts, but I've been charting them, and I think I see a pattern emerging, though it is still too early and could be coincidence.  Since switching adapters, I've played six movies.  Two have had audio drops, and both had TrueHD audio tracks.  The dropouts happen randomly (never in the same place twice if you rewatch).  And both movies have been on Seagate drives in my unRAID.

     

    I had another thread where I noticed my Seagate drives all had incrementing RAW read errors and Seek errors.  But after researching, everyone said this is normal, as long as the error correction number likewise increments.  But maybe it isn't such an innocent behavior as everyone thinks.

     

    Another problem could be file corruption.  I played these movies back from disc, bit streaming through a PS4, with no dropouts (so I think my receiver is okay decoding, and it's not a hardware error).  It could still be a failing Dune HD or something (but then you would think DTS-HD MA would get streaming errors too).  So as soon as I get a chance, I am going to re-rip the offending films and overwrite them on the Dune to see if the error goes away.  It could be some kind of bit decay, or errors from copying during the migration to XFS (even though all copies were CRC-checked against the originals and passed when I migrated).
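    Before re-ripping, it's also worth comparing checksums of the fresh rip against the copy on the server directly.  A minimal sketch (the demo below uses temp files; on the server the two arguments would be the fresh rip and the copy that shows dropouts, both hypothetical paths):

```shell
#!/bin/sh
# Compare two files by MD5 digest; prints "match" or "differ".
compare_files() {
    a=$(md5sum < "$1")
    b=$(md5sum < "$2")
    if [ "$a" = "$b" ]; then echo "match"; else echo "differ"; fi
}
# Demo with two identical temp files standing in for rip and server copy.
printf 'sample data' > /tmp/a.bin
cp /tmp/a.bin /tmp/b.bin
compare_files /tmp/a.bin /tmp/b.bin
```

    If the fresh rip and the server copy match but dropouts persist, that points away from bit decay and back at the network or player.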

  5. Well,

     

    I just checked unRAID, and it actually reports that my Ethernet controller is an Nvidia MCP77, not a Realtek.

     

    But MSI's website states "Supports 10/100/1000 Fast Ethernet by Realtek 8211CL".  It states the same in the manual, and I remember the 8211CL being mentioned on the box it came in as well.  So could unRAID 6.x be detecting the wrong controller, meaning I now have the wrong driver?

     

    Or maybe MSI just made some undocumented change to the motherboard, and I have some V2 revision that was never documented?
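    For what it's worth, as far as I can tell the RTL8211 is a PHY (the physical-layer transceiver), while the MAC itself lives in the Nvidia MCP77 chipset, so both descriptions can be true at once, and the forcedeth driver talking to the Nvidia MAC would be expected.  You can check which driver the kernel actually bound via sysfs.  A quick sketch:

```shell
#!/bin/sh
# List each network interface and the kernel driver bound to it.
# A "forcedeth" result would confirm the MAC really is the Nvidia
# chipset's, regardless of which PHY chip the spec sheet names.
list_drivers() {
    for iface in /sys/class/net/*; do
        name=$(basename "$iface")
        if [ -e "$iface/device/driver" ]; then
            drv=$(basename "$(readlink "$iface/device/driver")")
        else
            drv="(virtual)"
        fi
        echo "$name: $drv"
    done
}
list_drivers
```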

  6. I too have been having some Dune HD problems lately.  I have 5 Dune HDs on my network, being served by unRAID.  Before going to V6 of unRAID, I could watch a movie on a Dune and, at the same time, initiate a file transfer to the same drive that was playing the movie.  Never a problem in the past 5 years of doing this all the time.  Basically, my system has been trouble-free, other than the time I had a memory module fail.

     

    But after going to V6, if I try to do that, the movie that is playing will usually freeze.  So I've had to quit doing transfers when a movie is playing.  I don't know if this is all of V6 or not.  When I went to V6.1.3, I don't think I was having this problem.  But now I am on 6.1.9, and I definitely have the problem.  But I also migrated from ReiserFS/6.1.3 to XFS/6.1.9, so I don't know if it is an unRAID version change, or a change because of the file system.

     

    Also, when I was on V5.x, I never had any network errors.  But now that I am on 6.1.9, I get millions of RX overruns showing on my NIC (and I rebooted less than a week ago).  It is a Realtek 8211CL, for which unRAID loads the forcedeth .64 driver.  I have a second unRAID (also 6.1.9), which uses a Realtek 8168, and it has zero RX errors in over 6 months of uptime (it uses the r8169 Realtek driver).  I don't really know when these RX errors started, whether when I first went to 6, or more recently with 6.1.9.

     

    Recently, I've been noticing audio dropouts on high bit rate streams (Blu-ray rips with TrueHD) that I never had before, in movies I've watched dozens of times.  So I started watching, and every time there is an audio dropout, the RX overrun counter increments.

     

    So this is starting to become very frustrating.  I don't know if rolling back to 6.1.3 or 6.1.8 would help any.  I really don't want to go back to 5.x.  Or is it the XFS file system?  Or would buying a new NIC work?
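    The way I've been correlating the dropouts with the overruns is to poll the sysfs counters directly rather than eyeballing ifconfig.  A sketch of the check:

```shell
#!/bin/sh
# Print RX error and overrun counters for every interface from sysfs.
# Run before and after a playback session; if rx_over_errors climbs in
# step with the audio dropouts, the NIC/driver is dropping frames.
rx_counts() {
    for iface in /sys/class/net/*; do
        name=$(basename "$iface")
        errs=$(cat "$iface/statistics/rx_errors" 2>/dev/null || echo '?')
        over=$(cat "$iface/statistics/rx_over_errors" 2>/dev/null || echo '?')
        echo "$name rx_errors=$errs rx_over_errors=$over"
    done
}
rx_counts
```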

  7. I get a bunch of errors when I try to install this plugin in 6.1.7.

     

    Warning: simplexml_load_file(): /tmp/plugins/recycle.bin.plg:1: parser error : Document is empty in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193

     

    Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193

     

    Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193

     

    Warning: simplexml_load_file(): /tmp/plugins/recycle.bin.plg:1: parser error : Start tag expected, '<' not found in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193

     

    Warning: simplexml_load_file(): in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193

     

    Warning: simplexml_load_file(): ^ in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 193

    plugin: xml parse error

     

    I don't have a dynamix.plugin.manager folder/subfolders on my unRAID.  Even if I manually create these folders, I still get errors.  Any idea what gives?
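    For anyone hitting the same wall: "Document is empty" and "Start tag expected, '&lt;' not found" from simplexml_load_file mean the downloaded .plg file is zero bytes or isn't XML at all (for example, an HTML error page), which usually points to a failed download rather than the plugin manager itself.  A quick check on the fetched file (the path is the one from the warnings above):

```shell
#!/bin/sh
# Classify a downloaded .plg file: empty/missing, non-XML, or plausibly XML.
check_plg() {
    f=$1
    if [ ! -s "$f" ]; then
        echo "empty or missing - download likely failed"
    elif ! head -c 1 "$f" | grep -q '<'; then
        echo "does not start with '<' - not XML"
    else
        echo "looks like XML"
    fi
}
check_plg /tmp/plugins/recycle.bin.plg
```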

  8. ... The iStar and Icy Dock for me just don't cut it in the end...

     

    Can you elaborate a bit on what makes you feel this way?

     

    Why certainly good sir. :)

     

    The ICY DOCK issues were mostly around what I perceived to be cheap drive cages.  To me, this did not justify the cost.  Right now it is $25 more than the next two cheapest, the Norco and SM units.  That, along with what looked like less-than-great airflow in the rear, made me less than thrilled.  Once I plugged in my Corsair Molex cables and then tried to plug in SATA cables, I quickly realized that wouldn't work.  Many of the modular Molex cables come with those tabs on them (here) and those won't work.  The Icy Dock layout places some of the SATA plugs right below the power plug.  No other cage had this design.  So I have to go find a splitter...  Don't get me started about the fan either. :)

     

    iStarUSA - the cheapest of the bunch, it does have that going for it.  But on the model I have at least, there are zero airflow openings in the front, so I would be really concerned about heat.  I cannot test right now, as I have already had to RMA the unit due to tray 5 not working.  I was turned off by the design of the locking mechanism for the drive cages.  It seemed like the slightest pressure on them would pop them open, forcing me to use the itty-bitty locking mechanism, which is probably there because they know the trays pop open so easily.  ::) Unlike the Norco and SM, this one has another custom fan connector as well.

     

    After working with all four, I can say these two are at the bottom of the pack.  But this is my opinion. :) I really liked working with the Norco and SM models.  Now, at $95 you cannot beat the price of the iStarUSA unit, and both it and the Icy Dock will more than meet most people's needs.  But, personal preferences and all, I would go for the other two.

     

    Shawn

     

    I use Icy Dock cages and power supplies with the type of connector you linked.  I have never had any problem getting them to work with the Icy Dock 5x3. 

  9. I have used preclear 1.11 to clear two drives now.  On both, it hasn't shown any preclear status on MyMain.  Is there some configuration that I'm missing, or should it automatically get displayed in MyMain?

    Did you update unmenu to use the new MyMain?

    It should show:

    07-unmenu-mymain.awk: 1.53 - changes for myMain 3-10-11 release, contributed by bjp999 - 5.0b6 support - Revision: 223

    when you click on the "about" link.

     

    If you did, there is already one other person reporting it is not displaying their disk's pre-clear status.  Might be the same bug.

     

    Joe L.

     

    Joe,

     

    My unMenu install just shows version 1.5, not 1.53.  Where can I download MyMain 1.53?  I thought it was all integrated into unMenu, but I have installed the latest unMenu package from your Google site and it only installs version 1.5 of MyMain.

  10. In regards to the parity sync errors, I too have pulled my hair out trying to troubleshoot some, and have never been able to.  Memtest is clean; I ran it for 72 hours.  The drives were all initially fine when I went through this (one has since had reallocation errors, but it is fixed, and the problem still persists).  Three preclears per drive with zero errors, and clean SMART.

     

    Basically, I have seven data drives in my array plus parity.  Every time I run a parity check, unRAID finds 7 parity errors in the same location.  I repair the errors.  On the next check, it finds the same 7 errors on the same sectors.  I have four drives on the MB and three on a Supermicro.  I have gone through backplanes, and run without backplanes.  I've moved cables around, trying probably three different cables on each drive (only two on the Supermicro breakout cables).  I've swapped out 1TB drives for 2TB drives, rebuilt, and the same error appears in the same spot on the new 2TB drives.  I actually started getting this error with 6 data drives and got six errors.  I added a seventh drive, and on the first parity check afterwards, it went to seven errors.

     

    The only conclusion I could come to was that it was some problem on the Gigabyte motherboard, but what that could be, I have no idea (and there is no way to fix it short of replacing the MB, CPU, and RAM).  The network adapter on the MB did flake out (a gigabit adapter that stopped negotiating gigabit, even when forced), so maybe the disk controller is flaky too.  The errors don't seem to affect anything, so it doesn't bother me in this temporary system.

     

    I am collecting the parts for a 20 drive build (vs 12 drive in this build) in a Norco 4220 so I can rackmount.  I have a new MSI MB and Sempron 145, so I will be curious to see if these errors disappear with the new build.

  11. I just pre-cleared a new drive using the -A option.

     

    Where on the command line did you put the "-A" option?

     

    It should have been like this:

    preclear_disk.sh -A /dev/sdX

    and not like this:

    preclear_disk.sh /dev/sdX -A

     

    It converted easily enough using "-C 64", but I'm curious too.  Please keep me informed of the results on the next drive.

     

    The switch was in the right place.  The only thing I can think of is that I entered an extra space before or after the switch by accident.  But then I would think that if it ignored the switch, it would have ignored the dev entry too, and the command wouldn't have run at all.

     

    The pre-clear did do one weird thing.  Everything indicated that the pre-clear ran correctly.  However, at the end, it normally goes back to the command prompt.  I did this via telnet, and it never showed the command prompt at the end of the pre-clear, so I had to close telnet and reopen it.  That is partly why I ran the -t option, to check and make sure the drive had precleared.

  12. I just pre-cleared a new drive using the -A option.  I thought this was supposed to pre-clear with a starting sector at 64.  However, when I ran pre_clear with the -t option, it reports the drive is precleared with a starting sector of 63.

     

    What exactly did I do wrong?  Is -A not the right switch?

    which version of the preclear script did you use?

     

    post the output of

    fdisk -lu /dev/sdX

     

    (where sdX = your disk)

     

    I used capital -A as instructed in the original post.

     

    root@Tower:~# fdisk -lu /dev/sdf

     

    Disk /dev/sdf: 2000.3 GB, 2000398934016 bytes

    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Disk identifier: 0x00000000

     

      Device Boot      Start        End      Blocks  Id  System

    /dev/sdf1              63  3907029167  1953514552+  0  Empty

    Partition 1 does not end on cylinder boundary.

     

    The -C 64 option did convert it, so no big deal I guess.  But it is strange that -A didn't do it in the first place.  I have an EARS drive to do next, so I'll see what happens the second time around.

     

    PreClear unRAID Disk /dev/sdf

    ################################################################## 1.9

    Device Model:    WDC WD2001FASS-00U0B0

    Serial Number:    WD-WMAUR0293305

    Firmware Version: 01.00101

    User Capacity:    2,000,398,934,016 bytes

    ########################################################################

    Converting existing pre-cleared disk to start partition on sector 64

    ========================================================================1.9

    Step 1. Verifying existing pre-clear signature prior to conversion.  DONE

    Step 2. converting existing pre-clear signature:  DONE

    ========================================================================1.9

    ==

    == Conversion complete.

    == DISK /dev/sdf is now PRECLEARED with a starting sector of 64

    ==

    ============================================================================

    root@Tower:/boot#
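    For anyone checking their own drives: the start sector is the second column of the partition line in `fdisk -lu` output, so it's easy to pull out in a script.  A small sketch, fed the partition line from my output above (it assumes no boot "*" flag in that column, which is the normal case for unRAID data disks):

```shell
#!/bin/sh
# Extract the start sector of the first /dev/... partition line from
# `fdisk -lu` output; 63 = old unaligned layout, 64 = what -A should give.
get_start() {
    awk '$1 ~ /^\/dev\// {print $2; exit}'
}
printf '/dev/sdf1              63  3907029167  1953514552+  0  Empty\n' | get_start
```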

     

  13. Another possible means of determining if you have HPA is manually issuing the following command. This will probe all of your drives, including your USB Flash drive, so it's natural to see an HDIO_DRIVE_CMD failed for that drive. This takes the guesswork out of knowing what your drives are.

     

    hdparm -N /dev/[hs]d[a-z]

     

    /dev/sda:
    max sectors   = 312581808/312581808, HPA is disabled
    
    /dev/sdb:
    max sectors   = 3907029168/3907029168, HPA is disabled
    
    /dev/sdc:
    max sectors   = 3907029168/3907029168, HPA is disabled
    
    /dev/sdd:
    max sectors   = 3907029168/3907029168, HPA is disabled
    
    /dev/sde:
    max sectors   = 3907029168/3907029168, HPA is disabled
    
    /dev/sdf:
    HDIO_DRIVE_CMD(identify) failed: Invalid argument
    
    /dev/sdg:
    max sectors   = 3907029168/3907029168, HPA is disabled
    
    /dev/sdh:
    max sectors   = 3907029168/3907029168, HPA is disabled
    

     

     

    I tried this method but I'm getting crazy results...

     

    I can't copy and paste my telnet window, but every drive says HPA is invalid and gives something like this:

     

    /dev/sdb

    max sectors =18446744072344861488/11041584, HPA setting seems invalid

     

    Four of the drives in my array return that identical result, even though two are 2TB drives, one is 1.5TB, and one is 1TB.  First, that sector count seems invalid, and second, it shouldn't be the same for all four drives.

     

    I've got three other drives that return an equally puzzling result, but with a tiny sector count despite their being 1 and 2 TB drives.

     

    UPDATE:

     

    I updated to version 4.6 of unRAID.  I ran the same command and got a different result that still seems wrong:

     

    /dev/sdb

    max sectors =3907029168/14715056(18446744072344861488?), HPA setting seems invalid (buggy kernel device driver?)

     

    A check of my syslog shows no mention of HPA and reports the sector count of this drive as 3907029168, which is correct.
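    Since the two numbers in hdparm's "max sectors = X/Y" line are the currently visible sector count versus the native count, the things to flag are a mismatch between them or the "invalid" wording.  A small sketch that classifies each line of `hdparm -N` output (shown here against the sample line from the quote above):

```shell
#!/bin/sh
# Classify `hdparm -N` output lines: "ok" (counts equal), "HPA" (counts
# differ), or "suspect" (hdparm itself flags the setting as invalid).
check_hpa() {
    awk -F'[ =/,]+' '/max sectors/ {
        if ($0 ~ /invalid/)  print "suspect"
        else if ($3 == $4)   print "ok"
        else                 print "HPA"
    }'
}
printf 'max sectors   = 312581808/312581808, HPA is disabled\n' | check_hpa
```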

     
