TimV

Everything posted by TimV

  1. I have a problem with binhex-plex not recognizing all my music files. I'm running two Unraid systems and this only appears to be happening on system #2, but I'm afraid to remove and re-add the library for fear of the files not coming back. Both systems run the latest version of Unraid (6.12) and the binhex-plex docker. System #1 - Music was loaded over a year ago and has no problems, which implies there is nothing wrong with the mp3 files themselves or the naming format. It's possible Plex changed since I added the files, but I use the same naming convention for all my music, so I wonder how it could be fine for some and not others. System #2 - Earlier this year I noticed some of my music was not listed in Plex. I did a folder listing and the folder was present from within Plex, and there are no issues accessing the files from the OS. Here is a list of debugging steps I took, none of which worked:
     - Copied the files out of the directory and then back in.
     - Checked and updated the mp3 metadata.
     - Renamed the directory and the mp3 files.
     - Various shutdowns of the Plex docker and of Unraid itself.
     - Created a new library with a different path and copied the files into it.
     - Installed the binhex-plexpass docker; the files didn't show up there either.
     - Removed all Plex dockers, did an rm -rf in the appdata directories, and re-installed binhex-plex. Note: this made it worse -- now only 5% of my music is showing up, and the scan-files operation finishes really quickly.
     I'm at a loss, as I've used Plex and these files (as is) for the last 6-7 years with no issues at all. I thought nuking everything and starting over would remove any corruption problems that might be occurring, but the scan operation seems to be passing right over the files.
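     One thing worth ruling out that isn't in the list above is a permissions/ownership mismatch between the host and the container, since Plex silently skips files it can't read. A rough check -- the container name binhex-plex, the host path, and the internal mount point are assumptions to match against your own template:

        # Ownership/permissions as seen from the Unraid host (path is an example)
        ls -ln /mnt/user/Music | head

        # The same path as seen from inside the container
        # (container name and internal mount point are assumptions -- adjust to your mappings)
        docker exec -it binhex-plex ls -ln /media/Music | head

     If the UIDs/GIDs or read bits look off, the Docker Safe New Perms tool under Tools (from the Fix Common Problems plugin) can reset them.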
  2. Unassigned Devices (UD) shares are no longer showing up on macOS (10.15.7) or Windows 10. This started before I upgraded from 6.11.1 to 6.11.5, so I think it's related to UD. For the last year I've had no problem with UD shares (I have no problems seeing actual share directories), and then one day they magically disappeared. I've rebooted both my Mac and the Unraid server multiple times. I've gone in and unshared them, hit apply, and then re-did the share. My /etc/exports file has them listed. I think it's something related to UD, because Windows can no longer see them either, so it's not just a Mac thing. I have the latest version of UD, dated 3/3/23. Note: I have no problems accessing these disks from the Unraid server, so it looks like the mount is good but the network component is hosed. SOLVED: I guess 'Enable SMB Security' got changed in the UD settings, or the functionality changed. I set it from 'Off' to 'Public' and my shares showed up again.
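     For anyone else debugging this: a quick way to tell whether the server is actually advertising the share over SMB (versus a client-side problem) is to list its shares anonymously. The hostname and share name below are just examples:

        # List the SMB shares the server advertises to an anonymous guest
        smbclient -L //NEPTUNE -N

        # On the Unraid console, dump the effective Samba config and look for the UD share stanza
        # (replace the share name with yours)
        testparm -s 2>/dev/null | grep -i -A5 "your-ud-share"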
  3. This may sound stupid, but how do I use this? I don't see any instructions in the project other than how to get your GitHub token, which I got. I start the docker, get into the console, and have no idea what to run to make this work. I went through the posts hoping I'd see a similar question, but to no avail. Any guidance on getting started would be appreciated. Thanks.
  4. That was the problem. I saw it earlier when I was fiddling with my settings but wasn't sure. It works like a champ now. Thanks.
  5. I had a VM (Windows 10) I could connect to via RDP with no problems, but after I upgraded my motherboard I'm no longer able to. After reconfiguring the VM to start, I can only connect through VNC. The VM still has network and internet access, but it has a strange IP address: instead of a 192.168.0.x IP, it has a 192.168.122.x address. If I try to use that IP address, RDP will not connect. When I view the RDP settings in the VM, they are unchanged from before; it says I should be able to connect by its name (Proteus). Any suggestions as to why this is no longer working? I went into Device Manager and updated the network drivers, no dice. I can mount devices hosted on my Unraid server, but I'm unable to mount any non-Unraid network storage to the VM. Any suggestions?
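     A follow-up thought: 192.168.122.x is the default subnet of libvirt's built-in NAT network (virbr0), so the VM's NIC has probably fallen back to NAT instead of the br0 bridge after the motherboard swap. A rough way to check, assuming the VM is still named Proteus:

        # Show which network the VM's NIC is attached to
        virsh dumpxml Proteus | grep -A4 "<interface"

        # A NAT'd NIC typically reads: <interface type='network'> ... <source network='default'/>
        # Re-pointing it at the Unraid bridge (via the VM's edit page, or virsh edit Proteus)
        # should end up looking roughly like:
        #   <interface type='bridge'>
        #     <source bridge='br0'/>
        #   </interface>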
  6. Thanks. I was trying to use an ASUS Z590-A and was not able to boot. The suggestions in this thread helped me get it to boot; however, it wants to do an Intel network boot first despite the USB drive being first in the boot order.
  7. Yes, that is what it was. I have no idea why it turned on, because I wasn't using it in 6.10. Everything works now.
  8. I'm thinking the upgrade might've turned WireGuard on and set it to autostart. I went back to 6.10.3 and everything worked again. I'm trying the upgrade again.
  9. Upgrade to 6.11.1 breaks DNS. I cannot change anything in the networking section of settings. Attached is my diag output. neptune-diagnostics-20221016-1030.zip
  10. Everything was working fine, but after the upgrade my box cannot access the internet. I can log in locally over the network, but the box can't communicate out. OpenVPN reports a DNS error, and Fix Common Problems pops up saying I have a DNS problem, but I can't find it. Unraid itself cannot communicate back to GitHub. I'm good on intranet activity, but not internet. Attached is the output of my diagnostics run. I'm about to restore back to 6.10.3. I ran the pre-upgrade check and it passed on everything. neptune-diagnostics-20221016-1030.zip
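     For anyone else hitting this, a few checks from the Unraid console help separate a DNS problem from a general routing problem (8.8.8.8 below is just an example resolver):

        # Which resolver is Unraid actually using?
        cat /etc/resolv.conf

        # Can the box reach the internet by raw IP? (rules out gateway/routing issues)
        ping -c 3 8.8.8.8

        # Does name resolution work against the configured resolver, and against a public one?
        nslookup github.com
        nslookup github.com 8.8.8.8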
  11. I'm trying to use AzireVPN. I downloaded the config file and imported it into WireGuard. When I switch to advanced mode, I see their server name in there. 'VPN tunneled access' is the type I'm trying to set up. I can ping their server from the config screen, but when I activate it, the "last handshake" line shows an inactive connection. When I leave the screen and go back in, it's still inactive. I must be forgetting something fairly basic, yet I'm not seeing anything in this thread; it might be there and I just don't recognize it.
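     One thing that can help here: checking the tunnel from the command line sometimes shows more than the GUI does (wg0 below assumes it's the first/only tunnel defined):

        # Show the peer, endpoint and latest-handshake time for the tunnel
        wg show wg0

     If latest-handshake never populates, the outgoing UDP packets most likely never reach the endpoint (wrong port, router/ISP blocking, or a mismatched key pair).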
  12. I've got an old LG NAS (L2B2 or similar) that works fine with Windows and macOS, but I cannot mount it on unRAID 6.8.3. When going through the GUI it just does nothing when I try to mount it -- no logs, nothing -- so I opened up a shell window. When I try to mount from the command line, I get:

        root@Neptune:/mnt/remotes# mount -t nfs LG-NAS:/volume1_public /mnt/remotes/LG-NAS
        mount.nfs: requested NFS version or transport protocol is not supported

      So I tried NFS version 3... same thing:

        root@Neptune:/mnt/remotes# mount -vvvv -t nfs -o vers=3 LG-NAS:/volume1_public /mnt/remotes/LG-NAS
        mount.nfs: timeout set for Thu Mar 4 09:15:53 2021
        mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
        mount.nfs: prog 100003, trying vers=3, prot=6
        mount.nfs: portmap query retrying: RPC: Program not registered
        mount.nfs: prog 100003, trying vers=3, prot=17
        mount.nfs: portmap query failed: RPC: Program not registered
        mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
        mount.nfs: prog 100003, trying vers=3, prot=6
        mount.nfs: portmap query retrying: RPC: Program not registered
        mount.nfs: prog 100003, trying vers=3, prot=17
        mount.nfs: portmap query failed: RPC: Program not registered
        mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
        mount.nfs: prog 100003, trying vers=3, prot=6
        mount.nfs: portmap query retrying: RPC: Program not registered
        mount.nfs: prog 100003, trying vers=3, prot=17
        mount.nfs: portmap query failed: RPC: Program not registered
        mount.nfs: requested NFS version or transport protocol is not supported

      Checking with rpcinfo:

        root@Neptune:/# rpcinfo 10.0.0.13
           program version netid     address                service    owner
            100000       2 tcp       0.0.0.0.0.111          portmapper unknown
            100000       2 udp       0.0.0.0.0.111          portmapper unknown

      I also tried to mount with NFS version 2...

        root@Neptune:/# mount -vvvv -t nfs -o nfsvers=2 LG-NAS:/volume1_public /mnt/remotes/LG-NAS
        mount.nfs: timeout set for Thu Mar 4 09:33:32 2021
        mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
        mount.nfs: prog 100003, trying vers=2, prot=6
        mount.nfs: portmap query retrying: RPC: Program not registered
        mount.nfs: prog 100003, trying vers=2, prot=17
        mount.nfs: portmap query failed: RPC: Program not registered
        mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
        mount.nfs: prog 100003, trying vers=2, prot=6
        mount.nfs: portmap query retrying: RPC: Program not registered
        mount.nfs: prog 100003, trying vers=2, prot=17
        mount.nfs: portmap query failed: RPC: Program not registered
        mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
        mount.nfs: prog 100003, trying vers=2, prot=6
        mount.nfs: portmap query retrying: RPC: Program not registered
        mount.nfs: prog 100003, trying vers=2, prot=17
        mount.nfs: portmap query failed: RPC: Program not registered
        mount.nfs: requested NFS version or transport protocol is not supported

      No dice. Anything left for me to try?
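      For what it's worth, that rpcinfo output is the tell: only the portmapper is registered, with no nfs or mountd service, so the NAS isn't actually serving NFS on the network and no mount option on the Unraid side will fix that. Two hedged things to try (the share name, guest access, and SMB version below are assumptions to match against the NAS config):

        # Ask the NAS what it exports over NFS; an error or empty list means NFS isn't running
        showmount -e 10.0.0.13

        # List every RPC service the NAS registers; a working NFS server shows nfs and mountd here
        rpcinfo -p 10.0.0.13

        # Fallback: since Windows/macOS can reach it, mount the same share over SMB/CIFS instead
        mount -t cifs //10.0.0.13/volume1_public /mnt/remotes/LG-NAS -o guest,vers=1.0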
  13. You got this working? I have the same MOBO but with the B460 chipset, and I had to use 6.9-rc2 for it to recognize my onboard NIC.
  14. Is there some kind of roadmap that shows where unRAID wants to go at a high level? We used to get those once a year from IBM when dealing with Netezza.
  15. ST8000VN004-2M2101_WKD32TB8-20210212-2308.txt
  16. It finished this morning with no errors.
  17. Many hours later, I'm at 50%. Maybe it'll be done in the morning. lol
  18. I'm trying to run the extended test. So far I've done it twice, and it just sits there with a spinning circle reporting 10% done for over an hour. I see the disk is spun up and activity is happening. My guess is either something isn't right, or it's hitting every sector and won't update the percentage until it's done.
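     If the GUI spinner isn't trustworthy, the drive itself reports the test's progress; a hedged way to watch it from the console (the device name /dev/sdb is just an example):

        # How long the extended (long) self-test is expected to take on this drive
        smartctl -c /dev/sdb | grep -A1 "Extended self-test"

        # Current self-test status, including percent of the test remaining
        smartctl -a /dev/sdb | grep -A2 "Self-test execution status"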
  19. Just had some read errors not long after I started a move. 896 read errors just popped up, yet it's still a valid disk, whereas other times I get 1 read error and it's kicked out. Attached are the diagnostics. I don't have any user data on there; 55G of space shows as used, but I'm not sure there are any valid files. When I go in there as root from the command line and do an ls -lia, nothing shows up. My 30+ years of Unix experience tells me there is not really anything there, but you never know. cygnus-diagnostics-20210212-1105.zip
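     A couple of hedged checks to see whether that 55G corresponds to anything real (the path assumes the drive is disk2):

        # Look for any regular files anywhere on the disk
        find /mnt/disk2 -type f | head -n 20

        # Where the reported usage actually sits
        du -sh /mnt/disk2/* 2>/dev/null
        df -h /mnt/disk2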
  20. No, I rebuilt parity with all original drives included. Disk2 is the black sheep drive. WKD32TB8
  21. Here are my diagnostics, but I shut the system down last night after parity was rebuilt to avoid another surprise this morning. cygnus-diagnostics-20210210-1220.zip
  22. I'm running 6.9-rc2 with six 8TB IronWolf drives and a 10TB parity drive. Installed in my system is a Broadcom SAS 9300-8i HBA. I can't go to an earlier version of unRaid because it won't work with my MB (6.8.x would not work with my MB NIC). The MB is an Asus TUF Gaming B460M-PLUS. I originally hooked up all but my parity drive to the HBA and was getting random read errors which would kick drives out of the array. Sometimes it was 100+ errors before a drive got kicked out, other times it's 1 error and BAM -- gone! I posted my diag logs here and was told to upgrade the HBA firmware (done) and check the cables. I upgraded the firmware and changed to different cables, while at the same time moving as many drives as possible to the MB SATA connectors. This has greatly reduced my drive errors, and a drive that had tons of errors hasn't had a peep since, so I'm 99.9% sure it's an HBA issue and not an actual drive issue. Another interesting item is that this mainly happens at night, as if there is a sequence of events that triggers the supposed read error. The remaining HBA drive got kicked out yesterday, and I'm tired of always having to rebuild parity, so I want to either yank it or just remove the drive from my shares. I'm currently using unBalance to move the data off that drive to my other drives and would rather not have to rebuild the array, so my idea was to go into my share definitions and just exclude that drive from all my shares. My theory is that the drive will just sit there as an idle member of the array until I decide on a course of action; as long as it's idle, there is a much reduced chance of it getting kicked out. I don't know if my version of unRaid isn't playing nicely with the HBA or if the HBA is just finicky. Maybe I'll get another HBA? Any recommendations on a new HBA, or just in general?
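      One more data point worth collecting before buying hardware: the drives keep their own link-error counters, which help separate cable/HBA-path problems from genuine media problems. A rough check (the device name is an example):

        # Link-error vs. media-health counters on the suspect drive
        smartctl -a /dev/sdX | grep -i -E "UDMA_CRC_Error|Reallocated_Sector|Current_Pending_Sector"

      Rising CRC counts with zero reallocated/pending sectors generally point at the cable, backplane, or HBA rather than the drive itself, which would line up with the errors stopping once drives moved to the MB SATA ports.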
  23. How do you run this from the GUI, or do you? I've been all over my system and can't find a place to run it. I've run the Fix Common Problems plugin several times and don't see this as an option. I see the command above; is that the only way to run it? Never mind, I found it over in Tools.
  24. Drive Read errors

    Got the firmware updated. I opened the case to add an M.2 cache drive, and it seemed like the SAS cable had a subtle click when I checked the connection, so that might've been the problem. Everything ran fine for about 16 hours, and then I got another set of 8 errors on the same drive as yesterday, as well as 1 error on a 2nd drive, which kicked it out immediately. Maybe it's the cables, maybe it's the LSI card; I'm kinda getting the idea it's the LSI HBA. I ordered new cables and I'm about to switch to some unused LSI cables. Gotta wait for the pre-clear to finish on the new drives I added. Tim.
  25. Drive Read errors

    Thanks for the help. Can you explain a bit more about the connection/power problem? I'll get that firmware updated. Tim.