Yes, if the drive errors on reading the data back, then you know which drive is the problem, and that drive's data can be restored from parity (I run two parity drives). The silent type of data error, however, would not be recoverable, because you would not know which drive(s) hold invalid data. All you would know is that something is wrong during array parity checking/verification, but you could not recover without knowing which drive's bits are trustworthy.
I am very surprised to hear you handle large quantities of data without encountering this silent data corruption -- I have run into it myself a fair few times: video files getting corrupted out of nowhere, zip and 7z archives that no longer extract at all, and others. Regardless of your and my personal experience, silent data corruption is a known and studied phenomenon -- it's not something I'm making up. If you have any doubt about this, read the details about "silent data corruption" here: https://queue.acm.org/detail.cfm?id=1866298
Maybe you value your write speeds, but I would gladly give up 50% of my write speed (or even 75%) to get read-back verification on the cache mover in unRAID. It would be a checkbox and optional anyway, so you could simply opt not to use it. Also keep in mind you wouldn't lose any performance as long as you're not writing more data than your cache pool holds -- it would just be the mover taking some extra time in the middle of the night to move those files.
I would really like to see someone from unRAID staff chime in on this -- I want this on the roadmap, or a good reason why it's not needed (which I don't believe exists). I'm assuming the mover is some kind of simple application like Linux cp or Windows xcopy (which has a /v flag to do a read-back verify). This shouldn't be too hard to implement.
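To illustrate what I mean, here's a rough Python sketch of a read-back-verify copy (this is my own illustration, not how unRAID's mover actually works): hash the data while writing it, fsync, then drop the page cache for the destination so the re-read actually comes off the disk rather than from RAM, and compare hashes.

```python
import hashlib
import os


def copy_with_verify(src, dst, chunk_size=1 << 20):
    """Copy src to dst, then re-read dst from the physical disk
    and confirm its hash matches what was written.
    Hypothetical sketch -- not unRAID's actual mover code."""
    write_hash = hashlib.sha256()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(chunk_size)
            if not buf:
                break
            write_hash.update(buf)
            fout.write(buf)
        fout.flush()
        os.fsync(fout.fileno())  # force the data down to the device

    read_hash = hashlib.sha256()
    with open(dst, "rb") as fin:
        # Ask the kernel to evict its cached pages for this file, so the
        # following reads hit the disk instead of the page cache.
        os.posix_fadvise(fin.fileno(), 0, 0, os.POSIX_FADV_DONTNEED)
        while True:
            buf = fin.read(chunk_size)
            if not buf:
                break
            read_hash.update(buf)

    return write_hash.hexdigest() == read_hash.hexdigest()
```

The mover would only delete the source from the cache pool when this returns True; on a mismatch it could retry or alert. The fadvise step matters -- without it you'd mostly be "verifying" the copy still sitting in RAM.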