Posts posted by TimV

  1. I have a problem with binhex_plex not recognizing all my music files.  I'm running 2 Unraid systems and this appears to be happening only on system #2, but I'm afraid to remove and re-add the files for fear of them not coming back.

     

    Both systems are running the latest version of Unraid (6.12) and the binhex_plex docker.

     

    System #1 - Music was loaded over a year ago and has no problems.  This implies there is nothing wrong with the mp3 files themselves or the naming format.  It's possible Plex changed since I added the files, but I use the same naming convention for all my music, so I wonder how it could be fine for some and not others.

     

    System #2 - I noticed earlier this year that some of my music was not listed in Plex.  I checked the folder listing from within Plex and the folder was present.  No issues accessing the files from the OS.

       Here is a list of debugging steps I took, none of which worked.

       - I copied the files out of the directory and then back in.

       - I checked and updated the mp3 metadata.

       - Renamed the directory and mp3 files.

       - Various shut-downs of the Plex docker and of Unraid itself.

       - Created a new library with a different path and copied the files into it.

       - Installed the binhex plexpass docker and the music wouldn't show up there either.

       - Removed all Plex dockers, did an rm -rf on the appdata directories, and re-installed binhex-plex.

            - Note: this made it worse.  Now only 5% of my music is showing up, and the scan files operation finishes really quickly.

     

    I'm at a loss, as I've used Plex and these files (as is) for the last 6-7 years with no issues at all.  I thought nuking everything and starting over would remove any corruption problems, but the scan files operation seems to be passing right over the files.
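
    In case it helps anyone narrow this down, here's a minimal sketch of the next checks I'd run, assuming the container is named binhex-plex and the music library is mapped to /media inside it (both are assumptions, so adjust to your own mappings):

    # Confirm ownership/permissions on the host side (example album path)
    ls -la /mnt/user/music/SomeArtist/SomeAlbum/

    # Confirm the same files are visible from inside the container
    docker exec -it binhex-plex ls -la /media/SomeArtist/SomeAlbum/

    # Kick off a manual scan of one library section and watch the output; the
    # binhex image keeps the Plex binaries under /usr/lib/plexmediaserver
    # (verify on your install), and the section id is in the library's URL
    docker exec -it binhex-plex /usr/lib/plexmediaserver/Plex\ Media\ Scanner --scan --section 1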

  2. Unassigned Devices (UD) shares are no longer showing up on MacOS (10.15.7) or Windows 10.  This started before I upgraded from 6.11.1 to 6.11.5, so I think it's related to UD.  For the last year I've had no problem with UD shares (I have no problems seeing actual share directories) and then one day they magically disappeared.  I've rebooted both my Mac and the Unraid server multiple times.  I've gone in and unshared them, hit apply, and then re-did the share.  My /etc/exports file has them listed.  I think it's something related to UD because Windows can no longer see them either, so it's not just a Mac thing.  I have the latest version of UD, dated 3/3/23.

     

    Note: I have no problems accessing these disks from the Unraid server, so it looks like the mount is good, but the network component is hosed.

     

    SOLVED:  I guess 'Enable SMB Security' got changed in the UD settings, or the functionality changed.  I set it from 'Off' to 'Public' and my shares showed up again.
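
    For anyone hitting the same thing, a quick sketch of how to confirm whether the server is even advertising the shares over SMB (server and share names are assumed here, so substitute your own):

    # List the shares the server exports over SMB; -N skips the password prompt
    # (run from any Linux box, or from the Unraid shell itself)
    smbclient -L //Tower -N

    # Dump the effective Samba config on the server and look for the UD share
    testparm -s 2>/dev/null | grep -A5 'my-ud-share'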

  3. This may sound stupid, but how do I use this?  I don't see any instructions in the project other than how to get your GitHub token, which I got.  I start the docker, get into the console, and have no idea what to run to make this work.  I went through the thread hoping to find a similar question, but to no avail.  Any guidance on getting started would be appreciated.  Thanks.

  4. I had a VM (Windows 10) I could connect to via RDP with no problems, and when I upgraded my motherboard, I was no longer able to do so.  After reconfiguring the VM to start, I can only connect through VNC.  The VM still has network and internet access, but has a strange IP address.  Instead of having a 192.168.0.x IP, it has a 192.168.122.x address.  If I try to use that IP address, it will not connect to RDP.  When I view the RDP settings in the VM, they are unchanged from before.  It says I should be able to connect by its name (Proteus).  I went into device mangler and updated the network drivers; no dice.  I can mount devices hosted on my unraid server, but am unable to mount any non-unraid network storage to the VM.  Any suggestions as to why this is no longer working?
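
    One detail that may matter (an assumption on my part): 192.168.122.x is the subnet that libvirt's default NAT network (virbr0) hands out, so the VM may have come back attached to that instead of the br0 bridge after the motherboard swap.  A sketch of how to check, with the VM name assumed to be Proteus:

    # Show which network the VM's NIC is attached to
    virsh dumpxml Proteus | grep -A4 '<interface'

    # If it reports type='network' with source network='default' rather than
    # type='bridge' with source bridge='br0', edit it back to the bridge
    virsh edit Proteus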

  5. Everything was working fine, but after the upgrade my box cannot access the internet.  I can log in locally over the network, but the box can't communicate outbound.

     

    OpenVPN reports a DNS error, and Fix Common Problems pops up saying I have a DNS problem, but I can't find it.  Unraid itself cannot communicate back to GitHub.  I'm good on intranet activity, but not internet.  Attached is the output of my diagnostics run.  I'm about to restore back to 6.10.3.  I ran the pre-upgrade check and it passed on everything.
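
    For anyone looking at the diagnostics, a sketch of the basic checks that separate a DNS failure from a routing failure (all standard tools on the Unraid shell):

    cat /etc/resolv.conf     # which DNS server is the box actually configured with?
    ping -c 3 8.8.8.8        # raw internet routing, no DNS involved
    ping -c 3 github.com     # name resolution on top of routing
    nslookup github.com      # query the configured resolver directly

    If the raw-IP ping works but the hostname ones fail, it's purely a resolver problem rather than the network stack.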

    neptune-diagnostics-20221016-1030.zip

  6. I'm trying to use AzireVPN.  I downloaded the config file and uploaded it to WireGuard.  When I switch to advanced mode, I see their server name in there.  VPN tunneled access is the type I'm trying to set up.  I can ping their server from the config screen.  When I activate it, the "last handshake" line shows an inactive connection.  When I leave the screen and go back in, it's inactive.  I must be forgetting something fairly basic, yet I'm not seeing anything in this thread.  It might be there and I just don't recognize it.
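
    A sketch of how to watch the tunnel state from the shell, assuming the interface came up as wg0 (plain 'wg show' lists them all if not):

    # A working tunnel shows a recent "latest handshake" line here
    wg show wg0

    # Or just the handshake timestamps, one per peer
    wg show wg0 latest-handshakes

    If a handshake never appears, the peer isn't answering at all, which usually points to a wrong endpoint/port or a key mismatch in the imported config rather than anything on the Unraid side.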

  7. I've got an old LG-NAS (L2B2 or similar) that works fine with Windows and MacOS, but I cannot mount it on unRAID 6.8.3.  When going through the GUI, nothing happens when I try to mount it.  No logs, nothing, so I opened up a shell window.

     

    When I try to mount from the command line, I get:

     

    root@Neptune:/mnt/remotes# mount -t nfs LG-NAS:/volume1_public /mnt/remotes/LG-NAS
    mount.nfs: requested NFS version or transport protocol is not supported

     

    So, I tried nfs version 3... same thing.

     

    root@Neptune:/mnt/remotes# mount -vvvv -t nfs -o vers=3 LG-NAS:/volume1_public /mnt/remotes/LG-NAS
    mount.nfs: timeout set for Thu Mar  4 09:15:53 2021
    mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=3, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=3, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
    mount.nfs: prog 100003, trying vers=3, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=3, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: requested NFS version or transport protocol is not supported

     

    root@Neptune:/# rpcinfo 10.0.0.13
       program version netid     address                service    owner
        100000    2    tcp       0.0.0.0.0.111          portmapper unknown
        100000    2    udp       0.0.0.0.0.111          portmapper unknown

     

    I tried to mount with nfs version 2...

     

    root@Neptune:/# mount -vvvv -t nfs -o nfsvers=2 LG-NAS:/volume1_public /mnt/remotes/LG-NAS
    mount.nfs: timeout set for Thu Mar  4 09:33:32 2021
    mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
    mount.nfs: prog 100003, trying vers=2, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=2, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
    mount.nfs: prog 100003, trying vers=2, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=2, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
    mount.nfs: prog 100003, trying vers=2, prot=6
    mount.nfs: portmap query retrying: RPC: Program not registered
    mount.nfs: prog 100003, trying vers=2, prot=17
    mount.nfs: portmap query failed: RPC: Program not registered
    mount.nfs: requested NFS version or transport protocol is not supported

     

     

    No dice.  Anything left for me to try?
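
    One note for anyone reading along: the rpcinfo output above only lists the portmapper, whereas a working NFS server would also register nfs (program 100003) and mountd (100005).  A sketch of the remaining checks, plus an SMB fallback (share name assumed from the NFS path; an old NAS like this may need an explicitly old SMB protocol version):

    # List every RPC service the NAS has registered
    rpcinfo -p 10.0.0.13

    # Ask the NAS for its export list directly
    showmount -e 10.0.0.13

    # If NFS really isn't running on the NAS, fall back to mounting over SMB
    mount -t cifs //10.0.0.13/volume1_public /mnt/remotes/LG-NAS -o guest,vers=1.0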

     

  8. On 6/1/2020 at 1:06 PM, keiser said:

    Hey folks, just built my first Unraid box with an Intel 10700K, ASUS TUF GAMING Z490-PLUS mobo, 4x WD Elements 8TB, 64 GB RAM (Ballistix Sport LT 16GB x4).

     

    I had some issues getting up and running; neither my mobo's onboard NIC nor my USB-C ethernet adapter would work.  The only way I could get online was with a super old USB-A 2.0 100 Mbps ethernet adapter.  I ordered a USB-A gigabit adapter to see if that works.  I also have an ethernet card lying around here somewhere and I may just slap that in, too.  I hear the onboard NIC may just start working with the new kernel, so I'm not terribly worried about it.

     

    Drives all came up fine, just had to format them. Got the Unassigned Devices/Plus plugins working, got drives all sorted out, assigned to array, everything functional there. Already transferring data from backup drives.

     

    I've tried enabling the onboard iGPU so I can run Plex on Docker. No dice.

     

    BIOS is set to use the CPU GFX.

     

    Go script:

    
    #!/bin/bash
    # Start the Management Utility
    /usr/local/sbin/emhttp &
    
    modprobe i915
    chmod -R 777 /dev/dri

    syslinux.cfg:

    
    default menu.c32
    menu title Lime Technology, Inc.
    prompt 0
    timeout 50
    label Unraid OS
      menu default
      kernel /bzimage
      append initrd=/bzroot i915.alpha_support=1
    label Unraid OS GUI Mode
      kernel /bzimage
      append initrd=/bzroot,/bzroot-gui i915.alpha_support=1
    label Unraid OS Safe Mode (no plugins, no GUI)
      kernel /bzimage
      append initrd=/bzroot unraidsafemode
    label Unraid OS GUI Safe Mode (no plugins)
      kernel /bzimage
      append initrd=/bzroot,/bzroot-gui unraidsafemode
    label Memtest86+
      kernel /memtest

    The module appears to load, but /dev/dri never appears.

    
    root@KeiserUnraid:~# lsmod | grep i915
    i915                 1351680  0
    i2c_algo_bit           16384  1 i915
    iosf_mbi               16384  1 i915
    drm_kms_helper        135168  1 i915
    drm                   348160  2 drm_kms_helper,i915
    intel_gtt              20480  1 i915
    i2c_core               40960  4 drm_kms_helper,i2c_algo_bit,i915,drm
    video                  40960  1 i915
    backlight              16384  2 video,i915
    root@KeiserUnraid:~# chmod 777 /dev/dri
    chmod: cannot access '/dev/dri': No such file or directory

     

    Did you get this working?  I have the same mobo but with the B460 chipset, and I had to use 6.9-rc2 for it to recognize my onboard NIC.
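
    One possibly relevant kernel change (an assumption, not something I've verified on this exact board): newer kernels replaced i915.alpha_support with i915.force_probe=<device-id> for not-yet-supported iGPUs, so on a 6.9-rc build the append line needs the PCI device id instead:

    # Find the iGPU's PCI device id (the four hex digits after 8086:)
    lspci -nn | grep -i vga

    # Then in syslinux.cfg, replace i915.alpha_support=1 with, e.g.:
    #   append initrd=/bzroot i915.force_probe=9bc8
    # (9bc8 is only an example; use whatever id lspci reports for your chip)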

  9. I'm trying to run the extended test.  So far I've done it twice, and it just sits there with a spinning circle reporting 10% done for over an hour.  I see the disk is spun up and activity is happening.  My guess is that either something isn't right, or it's hitting every sector and won't update the percentage until it's done.
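
    A sketch of how to confirm the test is actually advancing independent of the GUI's spinner (device name assumed; check yours on the Main page or with lsblk):

    # The drive itself reports extended self-test progress as a percentage
    smartctl -a /dev/sdX | grep -A1 'Self-test execution'

    Re-running that every few minutes should show the "remaining" percentage ticking down even when the web UI doesn't update.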

  10. Just had some read errors not long after I started a move.  896 read errors just popped up, and it's still a valid disk, whereas other times I get 1 read error and it's kicked out.  Attached is the diagnostics.  I don't have any user data on there; 55GB of space is used, but I'm not sure if there are any valid files.  When I go in there as root from the command line and do an ls -lia, nothing shows up.  My 30+ years of unix experience tells me there is not really anything there, but you never know.
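
    For completeness, a sketch of how to check whether that 55GB is real data or just filesystem overhead (disk number assumed; substitute the actual mount point):

    # Walk the whole disk, hidden files included, without crossing mounts
    find /mnt/diskX -xdev | head -50

    # Total up what's actually on it
    du -sh /mnt/diskX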

    cygnus-diagnostics-20210212-1105.zip

  11. I'm running 6.9-rc2 with six 8TB IronWolf drives and a 10TB parity drive.  Installed in my system is a Broadcom SAS 9300-8i HBA.  I can't go to an earlier version of unRaid because it won't work with my MB (6.8.x would not work with my MB's NIC).  The MB is an Asus TUF Gaming B460M-PLUS.

     

    I originally hooked up all but my parity drive to the HBA and was getting random read errors, which would kick drives out of the array.  Sometimes it was 100+ errors before a drive got kicked out; other times it's 1 error and BAM, gone!  I posted my diag logs here and was told to upgrade the HBA firmware (done) and check the cables.  I upgraded the firmware and changed to different cables, while at the same time moving as many drives as possible to the MB SATA connectors.  This has greatly reduced my drive errors, and a drive that had tons of errors hasn't made a peep, so I'm 99.9% sure it's an HBA issue and not an actual drive issue.  Another interesting item is that this mainly happens at night... as if there is a sequence of events that triggers the supposed read error.

     

    The remaining HBA drive got kicked out yesterday, and I'm tired of always having to rebuild parity, so I want to either yank it or just remove the drive from my shares.  I'm currently using unBalance to move the data off that drive to my other drives (see the sketch below), and I would rather not have to rebuild the array, so my idea was to go into my share definitions and just exclude that drive from all my shares.  My theory is that the drive will just sit there as an idle member of the array until I decide on a course of action.  As long as it's idle, there is a much-reduced chance of it getting kicked out.
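
    For what it's worth, the move that unBalance performs boils down to an rsync between disk mount points; a sketch of the manual equivalent, with disk numbers assumed:

    # Copy everything off the flaky HBA-attached disk onto another array disk,
    # preserving attributes; verify the copy before deleting the originals
    rsync -avX /mnt/disk5/ /mnt/disk3/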

     

    I don't know if my version of unRaid isn't playing nicely with the HBA or if the HBA is just finicky.   Maybe I'll get another HBA?  Any recommendations on a new HBA or just in general?

  12. How do you run this from the GUI, or do you?  I've been all over my system and can't find a place to run it.  I've run the Fix Common Problems plugin several times and don't see this as an option.  I see the command above; is that the only way to run it?

     

     

    Never mind, I found it over in Tools.

  13. Got the firmware updated.  I opened the case to add an M.2 cache drive, and the SAS cable gave a subtle click when I checked the connection.  That might've been the problem.

     

    Everything ran fine for about 16 hours, and then I got another set of 8 errors on the same drive as yesterday, as well as 1 error on a 2nd drive, which immediately kicked it out.  Maybe it's the cables, maybe it's the LSI card.  I'm kinda getting the idea it's the LSI HBA.  I ordered new cables and I'm about to switch to some unused LSI cables.  Gotta wait for the pre-clear to finish on the new drives I added.

     

    Tim.
