bsim

Members

  • Posts: 191
  • Joined
  • Last visited
  • Gender: Undisclosed

bsim's Achievements

Explorer (4/14)

Reputation: 1

  1. But will a drive be flagged/emulated by Unraid (red X) because of too many uncorrectables? Or does there have to be something else wrong with the drive?
  2. Scratching my head on this one... an Unraid fluke? Dual parity, 12TB, 6.12.6, all XFS drives, monthly checks, 0 errors on the last parity check.
     1. Precleared a 5TB drive that I've had as a hot spare for the array: 0 errors, shows in Unassigned Devices.
     2. Removed a SMART-failed-but-still-good 4TB from the array (10 uncorrectables).
     3. Replaced it with the precleared 5TB.
     4. Started the array with a rebuild: 0 errors on the rebuild.
     5. Red X on a different 5TB drive!? ****That drive does have several uncorrectables.****
     Will a drive red-X and be emulated if it has too many uncorrectables, and not show any errors in the rebuild?
  3. I've read extensively on the forums and on Google, but haven't found someone with my exact, relatively simple interests... I have multiple large external hard drives (22TBx) and extensive scripts (using rsync) that do automatic backups when a drive is powered on (via an Unassigned Devices script) every month or so. My question is: what drive format should I use for the external drives? ***I would love to turn my current backup (based on changed files) into one that verifies the integrity of the backed-up files using filesystem checksums and refreshes files if bitrot occurs.*** I'm not extremely worried about accessing the drives from outside machines (Windows/NTFS), and definitely don't have any computational limitations. I'm thinking BTRFS and ZFS would be my best bet, but:
     • Can rsync backups use the FS checksumming built into these file systems to determine differences? (Is there a better option than rsync?)
     • Does FS checksumming in BTRFS/ZFS happen automatically in the background without complications (outside of the Unassigned Devices GUI)?
     • Is scriptable command-line FS checking easy for BTRFS/ZFS? (I'm currently using xfs_repair for XFS.) (See the rsync/scrub sketch after this list.)
  4. I see the point of being careful with automatic parity corrections, but my system is stable hardware-wise and has worked flawlessly for years. Just every once in a while I get a burst of sync errors on a 140TB array, and the number of errors doesn't seem like a major issue vs. the potential problems automatic parity correction would save me from. I've considered installing some type of indexing/checksum software to watch for any kind of bitrot or actual corruption... just haven't gotten around to it. It would be awesome if there were a way to translate the location of incorrect bits to at least a controller/drive/file; that would help greatly in my case. I don't see why the main Unraid driver couldn't spit out the details of the parity issue when doing corrections; it seems like it would be a great diagnostic tool.
  5. Are corrected parity sync errors truly corrected, or will there be some sort of hidden corruption? The errors are not recurring, and I often go several months/checks with no errors detected. The hardware has been stable/unchanged for years now. If I can't determine the issue using SMART, obvious Unraid errors, or any log files, then why wouldn't a correcting parity check just save me time?
  6. Running the latest Unraid Pro, 6.10.3, with dual parity, using mirrored SSDs (240GB, mover nightly) as cache drives... I run a large array and run parity checks automatically every month. Most times I get no parity errors, but sometimes I get a few thousand corrected parity errors. I do have power outages, but I have a UPS that does a graceful shutdown and can stand 20 minutes of runtime before it tells Unraid to issue a shutdown (though I guess it's possible the shutdown process takes longer than the UPS power could hold out while waiting for Unraid to stop the array). No drives have red balls or any issues with SMART attributes 5, 187, 188, 197, or 198 (the Backblaze-recommended ones), and the physical server has not been moved/opened in several months. Two questions:
     1. What files in the diagnostics download (saved immediately after the sync errors) would show me which files/drives reported the sync errors, and what am I looking for in those files that would tell me the details? (See the diagnostics grep sketch after this list.)
     2. Do corrected parity sync errors (with dual parity) mean that the data was corrected and no corruption has occurred?
  7. Is there a place for feature requests? Specifically:
     1. A button to easily archive or delete log files.
     2. A button to enable autoscroll when viewing the log... really handy when clicking the refresh button to watch a script's progress.
     3. An easy way to specify the log file location (storing it on the flash drive is handy).
  8. 6.9.2 is the version where I found it starting as well... From my research it seems like a common bug in smbd... can anyone confirm this, or whether it's due to client requests vs. the server?
  9. While reviewing the syslog, I've found thousands of these repeating lines for several months now... no real problems with the server, but I can't find any explanation, or others with the same problem, that would tell me whether this is a client connection issue or a server-side issue. (See the smbd grep sketch after this list.)
     Nov 1 03:23:11 UNRAID smbd[17025]: [2021/11/01 03:23:11.874565, 0] ../../source3/smbd/smbXsrv_client.c:663(smbXsrv_client_connection_pass_loop)
     Nov 1 03:23:11 UNRAID smbd[17025]: smbXsrv_client_connection_pass_loop: got connection sockfd[847]
     Anyone have any ideas?
  10. I get it, but I can't figure out why the data would be dumped to a file when it started on an actual device that was connected... shouldn't it just error out and disappear completely from the /dev folder?
  11. I would think that if the drive wasn't part of the array (abruptly disconnected), it would have just errored out... If I hadn't started from basics to figure out why the drive was being wonky, I could have sworn it was a very specific hard drive defect. The fact that the utility was writing data directly into /dev, and that it was being written as a file rather than a device, is really screwy. All three drives are now preclearing without issue.
  12. Never mind... it has to be a bug in the preclear plugin. By chance I SSH'ed (WinSCP) into Unraid and found that /dev/sdac was actually a 30GB file. I deleted the file, unplugged and replugged the SATA connection for that drive, and voilà! The problem must be a bug or unhandled case in the preclear script: previously, when I was preclearing the drives, I had unplugged one of them mid-preclear, and even after attempting to stop the preclear on that drive, it must have choked on something. After wiping out that /dev file, the drive came up in preclear! (See the block-device check sketch after this list.)
  13. Racking my brain on this one... 3 new 12TB drives (identical shucked drives; they do not have the disable pin).
     • Verified SMART OK on a separate Windows machine BEFORE the shuck (HD Tune Pro): ALL THREE OK.
     • Verified SMART OK on a separate Windows machine AFTER the shuck (HD Tune Pro): ALL THREE OK.
     • Wiped all partitions, ran read and write tests on all three: no problems.
     • Attached the 3 drives to the server; the preclear plugin identified 2 fine, but the 3rd drive does not show an "identity", only "_" (attached pics).
     • Switched the SATA connection with one of the working drives: the 3rd drive still does not identify.
     • Switched the power connection: the 3rd drive still does not identify.
     • From the command line, attempted to manually pull SMART (smartctl -H /dev/sdac): 2 drives passed; the possible bad drive reports "INQUIRY failed". (See the smartctl sketch after this list.)
     Is this just an oddly bad drive, or is there some sort of cached drive data going wonky?
  14. If a Docker app stores its data in the appdata location, what else would be stored in the docker image?
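
The rsync/scrub sketch referenced in post 3: a minimal outline assuming a BTRFS-formatted external drive mounted by Unassigned Devices at /mnt/disks/backup22tb and a source share of /mnt/user/media (both hypothetical paths). rsync does not read BTRFS/ZFS block checksums; it compares size/mtime (or its own checksums with --checksum), so bitrot detection has to come from a periodic filesystem scrub rather than from rsync itself.

    #!/bin/bash
    # Sketch: monthly backup plus integrity pass for a BTRFS-formatted external
    # drive. SRC and DST are hypothetical examples, not paths from the posts.
    SRC="/mnt/user/media"            # array share to back up (example)
    DST="/mnt/disks/backup22tb"      # Unassigned Devices mount point (example)

    # 1) Copy changed files. rsync compares size/mtime by default and does not
    #    use BTRFS/ZFS block checksums; --checksum forces full re-reads instead.
    rsync -a --delete "$SRC/" "$DST/backup/"

    # 2) Let the filesystem verify its own checksums. A BTRFS scrub re-reads
    #    every block and reports (and, where redundant copies exist, repairs)
    #    corruption; -B keeps it in the foreground so a script can log it.
    btrfs scrub start -B -d "$DST"
    btrfs scrub status "$DST"

    # For a ZFS-formatted drive the equivalent check would be:
    #   zpool scrub backuppool && zpool status -v backuppool

One caveat on the design: on a single-device BTRFS (or single-vdev ZFS) drive, a scrub can detect corruption but generally cannot repair data blocks, so refreshing a rotted file still means re-copying it from the array.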
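
The diagnostics grep sketch referenced in post 6, question 1. This assumes the diagnostics zip includes the syslog (it normally ships a logs/ folder with syslog.txt), but the exact wording of the md driver's parity messages varies between Unraid releases, so the grep patterns below are assumptions rather than the definitive format. The log reports sectors, not files, so it identifies the drive and location of a sync error but not the affected file.

    #!/bin/bash
    # Sketch: pull parity/sync-related lines out of a diagnostics zip.
    # The zip name and its internal layout are assumptions.
    DIAG="tower-diagnostics-20220801.zip"    # hypothetical file name

    unzip -o "$DIAG" -d diag/

    # Grep broadly, since the exact md-driver message text differs by release.
    grep -riE 'parity|sync error|recovery thread|corrected' diag/ | less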
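
The smbd grep sketch referenced in post 9, for gauging how often the message fires and from which smbd processes; since smbd forks one process per client, a single dominant PID suggests one client connection rather than a general server-side issue. /var/log/syslog is the stock Unraid location; adjust if the log is mirrored elsewhere.

    #!/bin/bash
    # Count occurrences of the repeating smbd message per day in the live syslog.
    grep 'smbXsrv_client_connection_pass_loop' /var/log/syslog \
        | awk '{print $1, $2}' | sort | uniq -c

    # Show which smbd PIDs emit it; one dominant PID points at a single client
    # connection rather than the server as a whole.
    grep 'smbXsrv_client_connection_pass_loop' /var/log/syslog \
        | grep -o 'smbd\[[0-9]*\]' | sort | uniq -c | sort -rn | head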
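
The block-device check sketch referenced in post 12, for confirming whether a /dev/sdX node is a real block device or a stray regular file left behind, as turned out to be the case with /dev/sdac. The device name is only an example.

    #!/bin/bash
    # Check whether a /dev node is an actual block device or a regular file
    # that something wrote into /dev by mistake. The device name is an example.
    DEV="/dev/sdac"

    if [ -b "$DEV" ]; then
        echo "$DEV is a block device"
        lsblk "$DEV"                              # size/partitions sanity check
    elif [ -f "$DEV" ]; then
        echo "$DEV is a regular file ($(stat -c %s "$DEV") bytes), not a disk"
        # Only remove it after confirming nothing has it open, e.g.:
        #   lsof "$DEV" || rm -i "$DEV"
    else
        echo "$DEV does not exist"
    fi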
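
The smartctl sketch referenced in post 13, for pulling identity and overall SMART health for a set of drives from the command line. "INQUIRY failed" typically means smartctl could not get the drive to answer its identify query at all, so the -i output (or lack of it) is more telling than -H alone. The device names are examples.

    #!/bin/bash
    # Query identity and overall SMART health for a list of drives.
    # Device names are examples; substitute the drives in question.
    for dev in /dev/sdaa /dev/sdab /dev/sdac; do
        echo "=== $dev ==="
        smartctl -i "$dev"    # identify/inquiry data; failure here means the
                              # drive is not answering, regardless of SMART
        smartctl -H "$dev"    # overall health self-assessment
    done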