jetskijoe

  1. I am sorry; I replied to a post in the announcements and should have replied to this post instead. I have an issue where, when NFS is under load, the shares disappear. The only fix I have found is rebooting Unraid (a watchdog sketch that restarts nfsd instead appears after this list). This only started happening when I went to RC3, so I thought maybe RC4 would fix it. Please see the attached error file, nfserror.txt:
     nfsd: non-standard errno: -103
     WARNING: CPU: 2 PID: 21122 at fs/nfsd/nfsproc.c:817 nfserrno+0x44/0x4a [nfsd]
  2. I cannot wait for the latest Docker; that will really help everything. I have about 25 dockers running right now, and I will probably add a few more once the new version comes out and I can use some of the new features.
  3. Yeah, I tried changing all the settings in ESXi and everything to do with the disks, so I am guessing it is a bug with the latest version or an incompatible driver.
  4. I have both passthrough and RDM. I pass through my add-on card and am not seeing the issue with it; I am seeing the issue with the RDM (an illustrative RDM command appears after this list). Marcusone: I tried downloading it a few times from different computers and get the same issue. I am not seeing any trouble with my Unraid, though; everything seems to be working fine. I am able to run my parity drive and do a check, and everything seems to be fine. It was just something that I noticed.
  5. I am currently running R5; the error seemed to appear between B14 and R2... I was unable to find R1 to check whether that was the build that introduced the issue. Thanks for the help.
  6. On ESXi I have Unraid running; I followed the directions on the forum and everything was working great. When I went from B14 to R2, the green light stayed blinking. Attached is a screenshot of the error that I got from the console window. Please let me know if you need any more information.
  7. It appears to install correctly for me.
  8. Quote: "You are fine. Apparently, 34 sectors were re-allocated before you performed the preclear, and the same 34 existed afterwards; in other words, no additional ones were detected. I'd go ahead and use the drive, but monitor it over the next months/years. I've got several old drives where an initial number of re-allocated sectors does not change, and they work perfectly fine. If you have time, give it another preclear cycle."
     I hate to ask this, but I searched and could not find an answer: what type of errors am I looking for? I have another 10 drives to do and I don't want to keep bugging you. Is there anything special I should be looking at (see the monitoring sketch after this list), or is it OK to just post the reports for someone to take a look at? Thanks for the quick response. jets
  9. Is this ok?
     = Disk Post-Clear-Read completed                                DONE
     Disk Temperature: 36C, Elapsed Time: 18:39:50
     ========================================================================1.12
     == ST31500341AS   9VS922T1
     == Disk /dev/sdb has been successfully precleared
     == with a starting sector of 64
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdb  /tmp/smart_finish_sdb
                   ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
         Raw_Read_Error_Rate =   106     112        6            ok          12389767
            Spin_Retry_Count =   100     100       97            near_thresh 20
            End-to-End_Error =   100     100       99            near_thresh 0
             High_Fly_Writes =     1       1        0            near_thresh 253
     Airflow_Temperature_Cel =    64      65       45            near_thresh 36
         Temperature_Celsius =    36      35        0            ok          36
      Hardware_ECC_Recovered =    58      35        0            ok          12389767
     No SMART attributes are FAILING_NOW
      0 sectors were pending re-allocation before the start of the preclear.
      0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
      0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
      0 sectors are pending re-allocation at the end of the preclear,
        the number of sectors pending re-allocation did not change.
     34 sectors had been re-allocated before the start of the preclear.
     34 sectors are re-allocated at the end of the preclear,
        the number of sectors re-allocated did not change.
     Sorry if I should have attached it.
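
Following up on the NFS drops in post 1: below is a minimal watchdog sketch, not taken from the thread, that restarts the NFS daemon instead of rebooting the whole server. The /mnt/user export path, the /var/log/nfs-watch.log file, and the Slackware-style /etc/rc.d/rc.nfsd script are all assumptions for illustration.

    #!/bin/bash
    # Sketch: poll the kernel's export list once a minute; if the shares
    # have vanished, log it and bounce the NFS daemon rather than reboot.
    # /mnt/user and rc.nfsd are assumptions, not confirmed by this thread.
    while sleep 60; do
        if ! exportfs | grep -q '^/mnt/user'; then
            echo "$(date): NFS exports gone, restarting nfsd" >> /var/log/nfs-watch.log
            /etc/rc.d/rc.nfsd restart
        fi
    done

This only papers over the underlying errno -103 warning, but it avoids a full reboot while the bug is being chased.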
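For readers unfamiliar with the passthrough-versus-RDM distinction in post 4: a physical-compatibility raw device mapping on ESXi is typically created with vmkfstools. The device identifier and datastore path below are placeholders, not values from this thread.

    # Create a physical-compatibility RDM pointing a VM at a whole disk.
    # <device-id> and the datastore path are placeholders.
    vmkfstools -z /vmfs/devices/disks/<device-id> \
        /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

Unlike a controller passed through whole, an RDM still routes I/O through ESXi's storage stack, which may be why the card-attached disks behave differently from the RDM disks here.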
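On the SMART questions in posts 8 and 9: the attributes usually worth watching between preclears are the reallocated and pending sector counts. A small check along those lines, assuming smartctl is installed and /dev/sdb is the drive from the report:

    #!/bin/bash
    # Print the raw values of the sector-health attributes; rising numbers
    # between runs are the "errors" to look for. /dev/sdb is from post 9.
    smartctl -A /dev/sdb | awk '
        /Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {
            print $2, "raw =", $NF
        }'

In the pasted report the pending counts stayed at 0 through the whole cycle and the reallocated count held at 34, which matches the "use it but keep watching" advice.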