TimV

Members
  • Posts: 31

  1. I've got an old LG-NAS (L2B2 or similar) that works fine with Windows and macOS, but I cannot mount it on unRAID 6.8.3. Going through the GUI, it just does nothing when I try to mount it, no logs, nothing, so I opened up a shell window. When I try to mount from the command line, I get:

     root@Neptune:/mnt/remotes# mount -t nfs LG-NAS:/volume1_public /mnt/remotes/LG-NAS
     mount.nfs: requested NFS version or transport protocol is not supported

     So I tried NFS version 3... same thing:

     root@Neptune:/mnt/remotes# mount -vvvv -t nfs -o vers=3 LG-NAS:/volume1_public /mnt/remotes/LG-NAS
     mount.nfs: timeout set for Thu Mar 4 09:15:53 2021
     mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
     mount.nfs: prog 100003, trying vers=3, prot=6
     mount.nfs: portmap query retrying: RPC: Program not registered
     mount.nfs: prog 100003, trying vers=3, prot=17
     mount.nfs: portmap query failed: RPC: Program not registered
     mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
     mount.nfs: prog 100003, trying vers=3, prot=6
     mount.nfs: portmap query retrying: RPC: Program not registered
     mount.nfs: prog 100003, trying vers=3, prot=17
     mount.nfs: portmap query failed: RPC: Program not registered
     mount.nfs: trying text-based options 'vers=3,addr=10.0.0.13'
     mount.nfs: prog 100003, trying vers=3, prot=6
     mount.nfs: portmap query retrying: RPC: Program not registered
     mount.nfs: prog 100003, trying vers=3, prot=17
     mount.nfs: portmap query failed: RPC: Program not registered
     mount.nfs: requested NFS version or transport protocol is not supported

     Checking what the NAS registers with the portmapper:

     root@Neptune:/# rpcinfo 10.0.0.13
        program version netid    address        service    owner
        100000  2       tcp      0.0.0.0.0.111  portmapper unknown
        100000  2       udp      0.0.0.0.0.111  portmapper unknown

     Then I tried to mount with NFS version 2...
     root@Neptune:/# mount -vvvv -t nfs -o nfsvers=2 LG-NAS:/volume1_public /mnt/remotes/LG-NAS
     mount.nfs: timeout set for Thu Mar 4 09:33:32 2021
     mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
     mount.nfs: prog 100003, trying vers=2, prot=6
     mount.nfs: portmap query retrying: RPC: Program not registered
     mount.nfs: prog 100003, trying vers=2, prot=17
     mount.nfs: portmap query failed: RPC: Program not registered
     mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
     mount.nfs: prog 100003, trying vers=2, prot=6
     mount.nfs: portmap query retrying: RPC: Program not registered
     mount.nfs: prog 100003, trying vers=2, prot=17
     mount.nfs: portmap query failed: RPC: Program not registered
     mount.nfs: trying text-based options 'nfsvers=2,addr=10.0.0.13'
     mount.nfs: prog 100003, trying vers=2, prot=6
     mount.nfs: portmap query retrying: RPC: Program not registered
     mount.nfs: prog 100003, trying vers=2, prot=17
     mount.nfs: portmap query failed: RPC: Program not registered
     mount.nfs: requested NFS version or transport protocol is not supported

     No dice. Anything left for me to try?
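For what it's worth, the rpcinfo output above only lists the portmapper itself (program 100000). A mountable NFS server also registers nfs (program 100003) and mountd (program 100005) with the portmapper, and their absence is exactly what every "RPC: Program not registered" line is complaining about. A minimal sketch of that check, run against the listing already captured above (the program numbers are standard RPC assignments; the parsing itself is just illustrative):

```shell
# The rpcinfo listing captured from the NAS above: only the portmapper
# (program 100000) is registered, on TCP and UDP.
rpcinfo_output='100000 2 tcp 0.0.0.0.0.111 portmapper unknown
100000 2 udp 0.0.0.0.0.111 portmapper unknown'

# A mountable NFS server must also register nfs (100003) and mountd (100005).
for prog in 100003 100005; do
  if echo "$rpcinfo_output" | grep -q "^$prog "; then
    echo "program $prog: registered"
  else
    echo "program $prog: NOT registered"
  fi
done
```

If those programs never appear, no vers=/nfsvers= option will help: the NFS service on the NAS side isn't actually exporting anything, so the fix is to (re)enable NFS on the LG-NAS itself or fall back to an SMB/CIFS mount.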
  2. You got this working? I have the same MOBO but with the B460 chipset, and I had to use 6.9-rc2 for it to recognize my onboard NIC.
  3. Is there some kind of roadmap that shows where unRAID wants to go at a high level? We used to get those once a year from IBM when dealing with Netezza.
  4. ST8000VN004-2M2101_WKD32TB8-20210212-2308.txt
  5. It was finished this morning with no errors.
  6. Many hours later, I'm at 50%. Maybe it'll be done in the morning. lol
  7. I'm trying to run the extended SMART test. So far I've tried it twice, and it just sits there with a spinning circle reporting 10% done for over an hour. I see the disk is spun up and activity is happening. My guess is either something isn't right, or it's hitting every sector and won't update the percentage until after it's done.
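A side note on the stuck percentage: smartctl only reports extended self-test progress in 10% steps, so on a large drive it can legitimately sit at the same value for hours. You can read the status directly with `smartctl -c /dev/sdX` (device name is a placeholder). A sketch that parses the kind of status line it prints while a test runs (the sample line and its 90% figure are made up for illustration):

```shell
# Illustrative status line of the form `smartctl -c` prints while an
# extended self-test is running (the 90% figure is a made-up sample).
status='Self-test routine in progress... 90% of test remaining.'

# smartctl reports "remaining" in 10% steps; convert to "done".
remaining=$(echo "$status" | grep -o '[0-9][0-9]*% of test remaining' | grep -o '[0-9][0-9]*')
echo "done: $((100 - remaining))%"
```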
  8. Just had some read errors not long after I started a move: 896 read errors popped up, yet the disk is still marked valid, whereas other times I get 1 read error and it's kicked out. Diagnostics are attached. I don't have any user data on there; 55G of space is used, but I'm not sure whether any valid files remain. When I go in as root from the command line and run ls -lia, nothing shows up. My 30+ years of Unix experience tells me there is not really anything there, but you never know. cygnus-diagnostics-20210212-1105.zip
  9. No, I rebuilt parity with all original drives included. Disk2 is the black sheep drive. WKD32TB8
  10. Here are my diagnostics, but I shut the system down last night after parity was rebuilt to avoid another surprise this morning. cygnus-diagnostics-20210210-1220.zip
  11. I'm running 6.9-rc2 with six 8 TB IronWolf drives and a 10 TB parity drive. Installed in my system is a Broadcom SAS 9300-8i HBA. I can't go to an earlier version of unRAID because it won't work with my MB (6.8.x would not work with my MB NIC). The MB is an Asus TUF Gaming B460M-PLUS.

      I originally hooked up all but my parity drive to the HBA and was getting random read errors which would kick drives out of the array. Sometimes it was 100+ errors before a drive got kicked out; other times it's 1 error and BAM--- gone! I posted my diag logs here and was told to upgrade the HBA firmware (done) and check the cables. I upgraded the firmware and changed to different cables while at the same time moving as many drives as possible to the MB SATA connectors. This has greatly reduced my drive errors, and a drive that had tons of errors hasn't had a peep, so I'm 99.9% sure it's an HBA issue and not an actual drive issue. Another interesting item is that this mainly happens at night... as if there is a sequence of events that renders the supposed read error.

      The remaining HBA drive got kicked out yesterday, and I'm tired of always having to rebuild parity, so I want to either yank it or just remove the drive from my share. I'm currently using unBalance to move the data off that drive to my other drives, and I would rather not have to rebuild the array, so my idea was to go into my share definitions and just exclude that drive from all my shares. My theory is that the drive will sit there as an idle member of the array until I decide on a course of action; as long as it's idle, there is a much reduced chance of it getting kicked out.

      I don't know if my version of unRAID isn't playing nicely with the HBA or if the HBA is just finicky. Maybe I'll get another HBA? Any recommendations, for a new HBA or just in general?
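One data point that can help separate a flaky HBA or cable from a genuinely failing disk is SMART attribute 199 (UDMA_CRC_Error_Count): it counts link-level transfer errors between the controller and the drive, so it rises with bad cables or a bad HBA port, while real media trouble shows up in attributes like 5 (Reallocated_Sector_Ct) instead. A sketch parsing sample attribute lines in the format `smartctl -A /dev/sdX` prints (the raw values here are made up for illustration):

```shell
# Made-up sample attribute lines in `smartctl -A /dev/sdX` format.
smart_attrs='  5 Reallocated_Sector_Ct   0x0033 100 100 010 Pre-fail Always - 0
199 UDMA_CRC_Error_Count    0x003e 200 200 000 Old_age  Always - 12'

# The last field of each attribute line is the raw count.
crc=$(echo "$smart_attrs" | awk '$1 == 199 {print $NF}')
realloc=$(echo "$smart_attrs" | awk '$1 == 5 {print $NF}')
echo "CRC errors: $crc, reallocated sectors: $realloc"
if [ "$crc" -gt 0 ] && [ "$realloc" -eq 0 ]; then
  echo "pattern points at cabling/HBA rather than the disk itself"
fi
```

If the CRC count climbs on drives behind the HBA but stays flat once they move to MB SATA ports, that lines up with the HBA/cable theory above.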
  12. How do you run this from the GUI, or do you? I've been all over my system and can't find a place to run it. I've run the Fix Common Problems plugin several times and don't see this as an option. I see the command above; is that the only way to run it? Never mind, I found it over in Tools.
  13. TimV

    Drive Read errors

    Got the firmware updated. I opened the case to add an M.2 cache drive, and it seemed like the SAS cable had a subtle click when I checked the connection; that might've been the problem. Everything ran fine for about 16 hours, and then I got another set of 8 errors on the same drive as yesterday, as well as 1 error on a 2nd drive, which kicked it out immediately. Maybe it's the cables, maybe it's the LSI card; I'm kinda getting the idea it's the LSI HBA. I ordered new cables and I'm about to switch to some unused LSI cables. Gotta wait for the preclear to finish on the new drives I added. Tim.
  14. TimV

    Drive Read errors

    Thanks for the help. Can you explain a bit more about the connection/power problem? I'll get that firmware updated. Tim.
  15. TimV

    Drive Read errors

    cygnus-diagnostics-20210127-0935.zip This is what you're looking for, yes?