Quardzu Posted March 24 (edited)

Same as above. Typically, after a reboot, the file system of the hard disks mounted in the pool shows as damaged. Repair attempts usually reveal that only a few files in certain directories are corrupted, but because of that single corrupted file, every file in those directories becomes unusable. For now I'm keeping those files as redundancy protection, so the directories stay intact.

The WebUI may be accessible for a while before it completely freezes or becomes unresponsive. Once the array starts, it cannot be stopped, and a normal shutdown is not possible; the only option is to power off. It does, however, seem possible to connect to the web console for short operations.

About the "invalid file writing": it's actually a daily issue. In Docker containers I often see a download task spend 100 GB to download a 100 MB file. Luckily Sony hasn't called me about this, but I need to schedule reboots to work around it, and I'm not sure my file system will survive the next reboot, LOL.

Array can't be stopped; parity can't be synced; invalid file writing. Current parity-check status:

Total size: 6 TB
Elapsed time: 7 hours, 20 minutes
Current position: 268 GB (4.5 %)
Estimated speed: 102.6 MB/sec
Estimated finish: 15 hours, 31 minutes

quartzs-diagnostics-20240324-1839.zip

Edited March 24 by Quardzu
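As a side note, the parity-check figures quoted above are internally consistent. A quick arithmetic check (a minimal sketch, assuming the decimal units Unraid reports, i.e. 1 TB = 1,000 GB):

```python
# Parity-check figures from the post above (decimal units assumed).
TOTAL_GB = 6_000      # 6 TB total array size
DONE_GB = 268         # current position
SPEED_MB_S = 102.6    # estimated speed

# Remaining data divided by speed gives the estimated finish time.
remaining_mb = (TOTAL_GB - DONE_GB) * 1_000
remaining_s = remaining_mb / SPEED_MB_S

hours, rem = divmod(remaining_s, 3600)
minutes = rem / 60
print(f"progress: {DONE_GB / TOTAL_GB:.1%}")                   # 4.5%
print(f"estimated finish: {int(hours)} h {minutes:.0f} min")   # 15 h 31 min
```

The computed 15 h 31 min matches the reported estimate, so the slow progress is genuinely a throughput issue (~100 MB/s across a 6 TB array) rather than a display glitch.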
itimpi Posted March 24

I would think files being damaged on a regular basis points to RAM issues. Have you run a memtest recently?
Quardzu Posted March 24 (Author, marked as Solution)

4 minutes ago, itimpi said: "I would think files being damaged on a regular basis would point to RAM issues. Have you run a memtest recently?"

Thanks. I haven't run that test in six months or more. I think I should, but I want to wait until the file system recovery is complete, because a few months ago I upgraded from 64 GB to 128 GB of RAM.
itimpi Posted March 24

3 hours ago, Quardzu said: "Thanks. I haven't run that test in six months or more. I think I should, but I want to wait until the file system recovery is complete, because a few months ago I upgraded from 64 GB to 128 GB of RAM."

Adding extra RAM sticks puts extra load on the memory controller, so you can end up in a situation where every stick tests fine individually, but you get RAM errors when the system is under load with all sticks installed.
Quardzu Posted March 24 (Author)

1 minute ago, itimpi said: "Adding extra RAM sticks puts extra load on the memory controller, so you can end up in a situation where every stick tests fine individually, but you get RAM errors when the system is under load with all sticks installed."

I'm running a memory test, and so far it looks okay. Perhaps I need to switch back to 2×32 GB or lower the memory frequency to 4800 or 4000. Can ECC memory mitigate these kinds of issues? I've found that workstation motherboards and ECC memory aren't that expensive anymore, and my CPU happens to support ECC.
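Since ECC comes up: it does mitigate exactly this failure mode (single-bit flips corrupting in-flight writes), and once installed you can confirm it is actually active via the Linux EDAC subsystem. A minimal sketch, assuming a Linux host with an EDAC driver loaded; the sysfs paths are standard EDAC entries, and on non-ECC systems the directory simply won't exist:

```python
# Check whether the Linux EDAC subsystem reports an active ECC memory
# controller, and read its corrected/uncorrected error counters.
from pathlib import Path

def ecc_status(edac_root: str = "/sys/devices/system/edac/mc") -> str:
    mc = Path(edac_root) / "mc0"   # first registered memory controller
    if not mc.is_dir():
        return "no EDAC memory controller registered (ECC likely inactive)"
    ce = (mc / "ce_count").read_text().strip()  # corrected error count
    ue = (mc / "ue_count").read_text().strip()  # uncorrected error count
    return f"ECC active: {ce} corrected, {ue} uncorrected errors"

print(ecc_status())
```

A steadily climbing corrected-error count with ECC would confirm the marginal-RAM theory without any file ever being damaged, which is the whole point of running ECC on a NAS.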